00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 977
00:00:00.000 originally caused by:
00:00:00.001  Started by upstream project "nightly-trigger" build number 3644
00:00:00.001  originally caused by:
00:00:00.001   Started by timer
00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.040 The recommended git tool is: git
00:00:00.041 using credential 00000000-0000-0000-0000-000000000002
00:00:00.044  > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.066  > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.090 Using shallow fetch with depth 1
00:00:00.090 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.090  > git --version # timeout=10
00:00:00.118  > git --version # 'git version 2.39.2'
00:00:00.118 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.146  > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.361  > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.370  > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.381 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.381  > git config core.sparsecheckout # timeout=10
00:00:03.390  > git read-tree -mu HEAD # timeout=10
00:00:03.404  > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.421 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.421  > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.517 [Pipeline] Start of Pipeline
00:00:03.527 [Pipeline] library
00:00:03.528 Loading library shm_lib@master
00:00:03.528 Library shm_lib@master is cached. Copying from home.
00:00:03.544 [Pipeline] node
00:00:03.555 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:03.556 [Pipeline] {
00:00:03.564 [Pipeline] catchError
00:00:03.565 [Pipeline] {
00:00:03.577 [Pipeline] wrap
00:00:03.585 [Pipeline] {
00:00:03.590 [Pipeline] stage
00:00:03.591 [Pipeline] { (Prologue)
00:00:03.778 [Pipeline] sh
00:00:04.062 + logger -p user.info -t JENKINS-CI
00:00:04.079 [Pipeline] echo
00:00:04.081 Node: WFP21
00:00:04.088 [Pipeline] sh
00:00:04.384 [Pipeline] setCustomBuildProperty
00:00:04.396 [Pipeline] echo
00:00:04.398 Cleanup processes
00:00:04.403 [Pipeline] sh
00:00:04.687 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:04.687 1530873 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:04.700 [Pipeline] sh
00:00:04.983 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:04.983 ++ grep -v 'sudo pgrep'
00:00:04.983 ++ awk '{print $1}'
00:00:04.983 + sudo kill -9
00:00:04.983 + true
00:00:04.995 [Pipeline] cleanWs
00:00:05.004 [WS-CLEANUP] Deleting project workspace...
00:00:05.004 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.009 [WS-CLEANUP] done
00:00:05.014 [Pipeline] setCustomBuildProperty
00:00:05.027 [Pipeline] sh
00:00:05.310 + sudo git config --global --replace-all safe.directory '*'
00:00:05.400 [Pipeline] httpRequest
00:00:06.140 [Pipeline] echo
00:00:06.141 Sorcerer 10.211.164.20 is alive
00:00:06.148 [Pipeline] retry
00:00:06.149 [Pipeline] {
00:00:06.163 [Pipeline] httpRequest
00:00:06.167 HttpMethod: GET
00:00:06.168 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.169 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.193 Response Code: HTTP/1.1 200 OK
00:00:06.193 Success: Status code 200 is in the accepted range: 200,404
00:00:06.194 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.880 [Pipeline] }
00:00:28.897 [Pipeline] // retry
00:00:28.905 [Pipeline] sh
00:00:29.191 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.208 [Pipeline] httpRequest
00:00:29.636 [Pipeline] echo
00:00:29.637 Sorcerer 10.211.164.20 is alive
00:00:29.646 [Pipeline] retry
00:00:29.648 [Pipeline] {
00:00:29.662 [Pipeline] httpRequest
00:00:29.666 HttpMethod: GET
00:00:29.666 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:29.667 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:29.684 Response Code: HTTP/1.1 200 OK
00:00:29.685 Success: Status code 200 is in the accepted range: 200,404
00:00:29.685 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:27.502 [Pipeline] }
00:01:27.519 [Pipeline] // retry
00:01:27.526 [Pipeline] sh
00:01:27.812 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:30.359 [Pipeline] sh
00:01:30.644 + git -C spdk log --oneline -n5
00:01:30.644 c13c99a5e test: Various fixes for Fedora40
00:01:30.644 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:30.644 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:30.644 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:30.644 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:30.662 [Pipeline] withCredentials
00:01:30.673  > git --version # timeout=10
00:01:30.686  > git --version # 'git version 2.39.2'
00:01:30.704 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:30.706 [Pipeline] {
00:01:30.715 [Pipeline] retry
00:01:30.717 [Pipeline] {
00:01:30.733 [Pipeline] sh
00:01:31.018 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:31.030 [Pipeline] }
00:01:31.047 [Pipeline] // retry
00:01:31.052 [Pipeline] }
00:01:31.068 [Pipeline] // withCredentials
00:01:31.078 [Pipeline] httpRequest
00:01:31.715 [Pipeline] echo
00:01:31.717 Sorcerer 10.211.164.20 is alive
00:01:31.728 [Pipeline] retry
00:01:31.731 [Pipeline] {
00:01:31.747 [Pipeline] httpRequest
00:01:31.751 HttpMethod: GET
00:01:31.752 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:31.752 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:31.764 Response Code: HTTP/1.1 200 OK
00:01:31.765 Success: Status code 200 is in the accepted range: 200,404
00:01:31.765 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:37.765 [Pipeline] }
00:01:37.782 [Pipeline] // retry
00:01:37.790 [Pipeline] sh
00:01:38.077 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:39.470 [Pipeline] sh
00:01:39.757 + git -C dpdk log --oneline -n5
00:01:39.757 eeb0605f11 version: 23.11.0
00:01:39.757 238778122a doc: update release notes for 23.11
00:01:39.757 46aa6b3cfc doc: fix description of RSS features
00:01:39.757 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:39.757 7e421ae345 devtools: support skipping forbid rule check
00:01:39.767 [Pipeline] }
00:01:39.782 [Pipeline] // stage
00:01:39.792 [Pipeline] stage
00:01:39.794 [Pipeline] { (Prepare)
00:01:39.815 [Pipeline] writeFile
00:01:39.831 [Pipeline] sh
00:01:40.116 + logger -p user.info -t JENKINS-CI
00:01:40.129 [Pipeline] sh
00:01:40.414 + logger -p user.info -t JENKINS-CI
00:01:40.426 [Pipeline] sh
00:01:40.711 + cat autorun-spdk.conf
00:01:40.711 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.711 SPDK_TEST_NVMF=1
00:01:40.711 SPDK_TEST_NVME_CLI=1
00:01:40.711 SPDK_TEST_NVMF_NICS=mlx5
00:01:40.711 SPDK_RUN_UBSAN=1
00:01:40.711 NET_TYPE=phy
00:01:40.711 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:40.711 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:40.718 RUN_NIGHTLY=1
00:01:40.724 [Pipeline] readFile
00:01:40.751 [Pipeline] withEnv
00:01:40.754 [Pipeline] {
00:01:40.766 [Pipeline] sh
00:01:41.052 + set -ex
00:01:41.052 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:41.052 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:41.052 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.052 ++ SPDK_TEST_NVMF=1
00:01:41.052 ++ SPDK_TEST_NVME_CLI=1
00:01:41.052 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:41.052 ++ SPDK_RUN_UBSAN=1
00:01:41.052 ++ NET_TYPE=phy
00:01:41.052 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:41.052 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:41.052 ++ RUN_NIGHTLY=1
00:01:41.052 + case $SPDK_TEST_NVMF_NICS in
00:01:41.052 + DRIVERS=mlx5_ib
00:01:41.052 + [[ -n mlx5_ib ]]
00:01:41.052 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:41.052 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:47.626 rmmod: ERROR: Module irdma is not currently loaded
00:01:47.626 rmmod: ERROR: Module i40iw is not currently loaded
00:01:47.626 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:47.626 + true
00:01:47.626 + for D in $DRIVERS
00:01:47.626 + sudo modprobe mlx5_ib
00:01:47.626 + exit 0
00:01:47.636 [Pipeline] }
00:01:47.655 [Pipeline] // withEnv
00:01:47.661 [Pipeline] }
00:01:47.674 [Pipeline] // stage
00:01:47.684 [Pipeline] catchError
00:01:47.685 [Pipeline] {
00:01:47.699 [Pipeline] timeout
00:01:47.699 Timeout set to expire in 1 hr 0 min
00:01:47.701 [Pipeline] {
00:01:47.715 [Pipeline] stage
00:01:47.717 [Pipeline] { (Tests)
00:01:47.745 [Pipeline] sh
00:01:48.033 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:48.033 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:48.033 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:48.033 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:48.033 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:48.033 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:48.033 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:48.033 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:48.033 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:48.033 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:48.033 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:48.033 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:48.033 + source /etc/os-release
00:01:48.033 ++ NAME='Fedora Linux'
00:01:48.033 ++ VERSION='39 (Cloud Edition)'
00:01:48.033 ++ ID=fedora
00:01:48.033 ++ VERSION_ID=39
00:01:48.033 ++ VERSION_CODENAME=
00:01:48.033 ++ PLATFORM_ID=platform:f39
00:01:48.033 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:48.033 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:48.033 ++ LOGO=fedora-logo-icon
00:01:48.033 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:48.033 ++ HOME_URL=https://fedoraproject.org/
00:01:48.033 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:48.033 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:48.033 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:48.033 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:48.033 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:48.033 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:48.033 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:48.033 ++ SUPPORT_END=2024-11-12
00:01:48.033 ++ VARIANT='Cloud Edition'
00:01:48.033 ++ VARIANT_ID=cloud
00:01:48.033 + uname -a
00:01:48.033 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:48.034 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:50.572 Hugepages
00:01:50.572 node   hugesize   free / total
00:01:50.572 node0  1048576kB  0 / 0
00:01:50.572 node0  2048kB     0 / 0
00:01:50.572 node1  1048576kB  0 / 0
00:01:50.572 node1  2048kB     0 / 0
00:01:50.572
00:01:50.572 Type   BDF           Vendor  Device  NUMA  Driver   Device  Block devices
00:01:50.572 I/OAT  0000:00:04.0  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.1  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.2  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.3  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.4  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.5  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.6  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:00:04.7  8086    2021    0     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.0  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.1  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.2  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.3  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.4  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.5  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.6  8086    2021    1     ioatdma  -       -
00:01:50.572 I/OAT  0000:80:04.7  8086    2021    1     ioatdma  -       -
00:01:50.572 NVMe   0000:d8:00.0  8086    0a54    1     nvme     nvme0   nvme0n1
00:01:50.572 + rm -f /tmp/spdk-ld-path
00:01:50.572 + source autorun-spdk.conf
00:01:50.572 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.572 ++ SPDK_TEST_NVMF=1
00:01:50.572 ++ SPDK_TEST_NVME_CLI=1
00:01:50.572 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:50.572 ++ SPDK_RUN_UBSAN=1
00:01:50.572 ++ NET_TYPE=phy
00:01:50.572 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:50.572 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:50.572 ++ RUN_NIGHTLY=1
00:01:50.572 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:50.572 + [[ -n '' ]]
00:01:50.572 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:50.572 + for M in /var/spdk/build-*-manifest.txt
00:01:50.572 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:50.572 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:50.572 + for M in /var/spdk/build-*-manifest.txt 00:01:50.572 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:50.572 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:50.572 + for M in /var/spdk/build-*-manifest.txt 00:01:50.572 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:50.572 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:50.572 ++ uname 00:01:50.572 + [[ Linux == \L\i\n\u\x ]] 00:01:50.572 + sudo dmesg -T 00:01:50.572 + sudo dmesg --clear 00:01:50.572 + dmesg_pid=1532384 00:01:50.572 + [[ Fedora Linux == FreeBSD ]] 00:01:50.572 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.572 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.572 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:50.572 + [[ -x /usr/src/fio-static/fio ]] 00:01:50.572 + export FIO_BIN=/usr/src/fio-static/fio 00:01:50.572 + FIO_BIN=/usr/src/fio-static/fio 00:01:50.572 + sudo dmesg -Tw 00:01:50.572 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:50.572 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:50.572 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:50.572 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.572 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.572 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:50.572 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.572 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.572 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:50.572 Test configuration: 00:01:50.572 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.572 SPDK_TEST_NVMF=1 00:01:50.572 SPDK_TEST_NVME_CLI=1 00:01:50.572 SPDK_TEST_NVMF_NICS=mlx5 00:01:50.572 SPDK_RUN_UBSAN=1 00:01:50.572 NET_TYPE=phy 00:01:50.572 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:50.572 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:50.572 RUN_NIGHTLY=1 05:04:07 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:50.572 05:04:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:50.572 05:04:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:50.572 05:04:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:50.572 05:04:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:50.572 05:04:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.572 05:04:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.572 05:04:07 -- 
paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.572 05:04:07 -- paths/export.sh@5 -- $ export PATH 00:01:50.572 05:04:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.572 05:04:07 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:50.572 05:04:07 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:50.572 05:04:07 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731989047.XXXXXX 00:01:50.572 05:04:07 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731989047.UF642I 00:01:50.572 05:04:07 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:50.572 05:04:07 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:01:50.572 05:04:07 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:50.572 05:04:07 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:50.572 05:04:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:50.572 05:04:07 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:50.572 05:04:07 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:50.572 05:04:07 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:50.572 05:04:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.573 05:04:07 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:50.573 05:04:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.573 05:04:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.573 05:04:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:50.573 05:04:07 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.573 Tue Nov 19 04:04:07 AM UTC 2024 00:01:50.573 05:04:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.573 LTS-67-gc13c99a5e 00:01:50.573 05:04:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.573 05:04:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.573 05:04:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.573 05:04:07 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:50.573 05:04:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:50.573 05:04:07 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:50.573 ************************************ 00:01:50.573 START TEST ubsan 00:01:50.573 ************************************ 00:01:50.573 05:04:07 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:50.573 using ubsan 00:01:50.573 00:01:50.573 real 0m0.000s 00:01:50.573 user 0m0.000s 00:01:50.573 sys 0m0.000s 00:01:50.573 05:04:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:50.573 05:04:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.573 ************************************ 00:01:50.573 END TEST ubsan 00:01:50.573 ************************************ 00:01:50.832 05:04:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:50.832 05:04:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:50.832 05:04:07 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:50.832 05:04:07 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:50.832 05:04:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:50.832 05:04:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.832 ************************************ 00:01:50.832 START TEST build_native_dpdk 00:01:50.832 ************************************ 00:01:50.832 05:04:07 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:50.832 05:04:07 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:50.832 05:04:07 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:50.832 05:04:07 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:50.832 05:04:07 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:50.832 05:04:07 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:50.832 05:04:07 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:50.832 05:04:07 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:50.832 05:04:07 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:50.832 05:04:07 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:50.832 05:04:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:50.832 05:04:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:50.832 05:04:07 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:50.832 05:04:07 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:50.832 05:04:07 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:50.833 05:04:07 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:50.833 05:04:07 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:50.833 05:04:07 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:50.833 05:04:07 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:50.833 05:04:07 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:50.833 eeb0605f11 version: 23.11.0 00:01:50.833 238778122a doc: update release notes for 23.11 00:01:50.833 46aa6b3cfc doc: fix description of RSS features 00:01:50.833 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:50.833 7e421ae345 devtools: support skipping forbid rule check 00:01:50.833 05:04:07 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:50.833 05:04:07 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:50.833 05:04:07 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:50.833 05:04:07 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:50.833 05:04:07 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:50.833 05:04:07 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:50.833 05:04:07 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:50.833 05:04:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:50.833 05:04:07 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:50.833 05:04:07 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:50.833 05:04:07 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:50.833 05:04:07 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:50.833 05:04:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:50.833 05:04:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:50.833 05:04:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:50.833 05:04:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:50.833 05:04:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:50.833 05:04:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.833 05:04:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:50.833 05:04:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:50.833 05:04:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:50.833 05:04:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:50.833 05:04:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:50.833 05:04:07 -- scripts/common.sh@343 -- $ case "$op" in 00:01:50.833 05:04:07 -- scripts/common.sh@344 -- $ : 1 00:01:50.833 05:04:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:50.833 05:04:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.833 05:04:07 -- scripts/common.sh@364 -- $ decimal 23 00:01:50.833 05:04:07 -- scripts/common.sh@352 -- $ local d=23 00:01:50.833 05:04:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:50.833 05:04:07 -- scripts/common.sh@354 -- $ echo 23 00:01:50.833 05:04:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:50.833 05:04:07 -- scripts/common.sh@365 -- $ decimal 21 00:01:50.833 05:04:07 -- scripts/common.sh@352 -- $ local d=21 00:01:50.833 05:04:07 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:50.833 05:04:07 -- scripts/common.sh@354 -- $ echo 21 00:01:50.833 05:04:07 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:50.833 05:04:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:50.833 05:04:07 -- scripts/common.sh@366 -- $ return 1 00:01:50.833 05:04:07 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:50.833 patching file config/rte_config.h 00:01:50.833 Hunk #1 succeeded at 60 (offset 1 line). 00:01:50.833 05:04:07 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:50.833 05:04:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:50.833 05:04:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:50.833 05:04:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:50.833 05:04:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:50.833 05:04:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:50.833 05:04:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.833 05:04:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:50.833 05:04:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:50.833 05:04:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:50.833 05:04:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:50.833 05:04:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:50.833 05:04:07 -- scripts/common.sh@343 -- $ case "$op" in 00:01:50.833 05:04:07 -- scripts/common.sh@344 -- $ : 1 00:01:50.833 05:04:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:50.833 05:04:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.833 05:04:07 -- scripts/common.sh@364 -- $ decimal 23 00:01:50.833 05:04:07 -- scripts/common.sh@352 -- $ local d=23 00:01:50.833 05:04:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:50.833 05:04:07 -- scripts/common.sh@354 -- $ echo 23 00:01:50.833 05:04:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:50.833 05:04:07 -- scripts/common.sh@365 -- $ decimal 24 00:01:50.833 05:04:07 -- scripts/common.sh@352 -- $ local d=24 00:01:50.833 05:04:07 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:50.833 05:04:07 -- scripts/common.sh@354 -- $ echo 24 00:01:50.833 05:04:07 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:50.833 05:04:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:50.833 05:04:07 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:50.833 05:04:07 -- scripts/common.sh@367 -- $ return 0 00:01:50.833 05:04:07 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:50.833 patching file lib/pcapng/rte_pcapng.c 00:01:50.833 05:04:07 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:50.833 05:04:07 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:50.833 05:04:07 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:50.833 05:04:07 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:50.833 05:04:07 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:56.110 The Meson build system 00:01:56.110 Version: 1.5.0 00:01:56.110 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:56.110 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:56.110 Build type: native build 00:01:56.110 Program cat found: YES (/usr/bin/cat) 00:01:56.110 Project name: DPDK 00:01:56.111 Project version: 23.11.0 00:01:56.111 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.111 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:56.111 Host machine cpu family: x86_64 00:01:56.111 Host machine cpu: x86_64 00:01:56.111 Message: ## Building in Developer Mode ## 00:01:56.111 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.111 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:56.111 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.111 Program python3 found: YES (/usr/bin/python3) 00:01:56.111 Program cat found: YES (/usr/bin/cat) 00:01:56.111 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
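
For reference, the cmp_versions trace above splits each dotted version string on '.' and '-' and compares the fields numerically from left to right, which is why 23.11.0 fails the "older than 21.11.0" check yet passes the "older than 24.07.0" check that selects the rte_pcapng patch. The helper below is a minimal standalone sketch of that logic, assuming purely numeric fields as in this log; the name lt_version is invented here and is not the verbatim scripts/common.sh helper:

lt_version() {
    # Split "23.11.0"-style strings into fields, as the trace does with IFS=.-
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        # First differing field decides; missing fields default to 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt_version 23.11.0 21.11.0 || echo 'not older'   # the return-1 branch traced above
lt_version 23.11.0 24.07.0 && echo 'older'       # the branch that patches rte_pcapng.c
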
00:01:56.111 Compiler for C supports arguments -march=native: YES 00:01:56.111 Checking for size of "void *" : 8 00:01:56.111 Checking for size of "void *" : 8 (cached) 00:01:56.111 Library m found: YES 00:01:56.111 Library numa found: YES 00:01:56.111 Has header "numaif.h" : YES 00:01:56.111 Library fdt found: NO 00:01:56.111 Library execinfo found: NO 00:01:56.111 Has header "execinfo.h" : YES 00:01:56.111 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.111 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency openssl found: YES 3.1.1 00:01:56.111 Run-time dependency libpcap found: YES 1.10.4 00:01:56.111 Has header "pcap.h" with dependency libpcap: YES 00:01:56.111 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.111 Compiler for C supports arguments -Wdeprecated: YES 00:01:56.111 Compiler for C supports arguments -Wformat: YES 00:01:56.111 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.111 Compiler for C supports arguments -Wformat-security: NO 00:01:56.111 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.111 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:56.111 Compiler for C supports arguments -Wnested-externs: YES 00:01:56.111 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.111 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.111 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.111 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.111 Compiler for C supports arguments -Wundef: YES 00:01:56.111 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.111 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.111 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:56.111 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.111 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.111 Program objdump found: YES (/usr/bin/objdump) 00:01:56.111 Compiler for C supports arguments -mavx512f: YES 00:01:56.111 Checking if "AVX512 checking" compiles: YES 00:01:56.111 Fetching value of define "__SSE4_2__" : 1 00:01:56.111 Fetching value of define "__AES__" : 1 00:01:56.111 Fetching value of define "__AVX__" : 1 00:01:56.111 Fetching value of define "__AVX2__" : 1 00:01:56.111 Fetching value of define "__AVX512BW__" : 1 00:01:56.111 Fetching value of define "__AVX512CD__" : 1 00:01:56.111 Fetching value of define "__AVX512DQ__" : 1 00:01:56.111 Fetching value of define "__AVX512F__" : 1 00:01:56.111 Fetching value of define "__AVX512VL__" : 1 00:01:56.111 Fetching value of define "__PCLMUL__" : 1 00:01:56.111 Fetching value of define "__RDRND__" : 1 00:01:56.111 Fetching value of define "__RDSEED__" : 1 00:01:56.111 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.111 Fetching value of define "__znver1__" : (undefined) 00:01:56.111 Fetching value of define "__znver2__" : (undefined) 00:01:56.111 Fetching value of define "__znver3__" : (undefined) 00:01:56.111 Fetching value of define "__znver4__" : (undefined) 00:01:56.111 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.111 Message: lib/log: Defining dependency "log" 00:01:56.111 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.111 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:56.111 Checking for function "getentropy" : NO 00:01:56.111 Message: lib/eal: Defining dependency "eal" 00:01:56.111 Message: lib/ring: Defining dependency "ring" 00:01:56.111 Message: lib/rcu: Defining dependency "rcu" 00:01:56.111 Message: lib/mempool: Defining dependency "mempool" 00:01:56.111 Message: lib/mbuf: Defining dependency "mbuf" 00:01:56.111 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:56.111 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:56.111 Compiler for C supports arguments -mpclmul: YES 00:01:56.111 Compiler for C supports arguments -maes: YES 00:01:56.111 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.111 Compiler for C supports arguments -mavx512bw: YES 00:01:56.111 Compiler for C supports arguments -mavx512dq: YES 00:01:56.111 Compiler for C supports arguments -mavx512vl: YES 00:01:56.111 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:56.111 Compiler for C supports arguments -mavx2: YES 00:01:56.111 Compiler for C supports arguments -mavx: YES 00:01:56.111 Message: lib/net: Defining dependency "net" 00:01:56.111 Message: lib/meter: Defining dependency "meter" 00:01:56.111 Message: lib/ethdev: Defining dependency "ethdev" 00:01:56.111 Message: lib/pci: Defining dependency "pci" 00:01:56.111 Message: lib/cmdline: Defining dependency "cmdline" 00:01:56.111 Message: lib/metrics: Defining dependency "metrics" 00:01:56.111 Message: lib/hash: Defining dependency "hash" 00:01:56.111 Message: lib/timer: Defining dependency "timer" 00:01:56.111 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:56.111 Message: lib/acl: Defining dependency "acl" 00:01:56.111 Message: lib/bbdev: Defining dependency "bbdev" 00:01:56.111 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:56.111 Run-time dependency libelf found: YES 0.191 00:01:56.111 Message: lib/bpf: Defining dependency "bpf" 00:01:56.111 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:56.111 Message: lib/compressdev: Defining dependency "compressdev" 00:01:56.111 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:56.111 Message: lib/distributor: Defining dependency "distributor" 00:01:56.111 Message: lib/dmadev: Defining dependency "dmadev" 00:01:56.111 Message: lib/efd: Defining dependency "efd" 00:01:56.111 Message: lib/eventdev: Defining dependency "eventdev" 00:01:56.111 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:56.111 Message: lib/gpudev: Defining dependency "gpudev" 00:01:56.111 Message: lib/gro: Defining dependency "gro" 00:01:56.111 Message: lib/gso: Defining dependency "gso" 00:01:56.111 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:56.111 Message: lib/jobstats: Defining dependency "jobstats" 00:01:56.111 Message: lib/latencystats: Defining dependency "latencystats" 00:01:56.111 Message: lib/lpm: Defining dependency "lpm" 00:01:56.111 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:56.111 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:56.111 Message: lib/member: Defining dependency "member" 00:01:56.111 Message: lib/pcapng: Defining dependency "pcapng" 00:01:56.111 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:56.111 Message: lib/power: Defining dependency "power" 00:01:56.111 Message: lib/rawdev: Defining dependency "rawdev" 00:01:56.111 Message: lib/regexdev: Defining dependency "regexdev" 00:01:56.111 Message: lib/mldev: Defining dependency "mldev" 00:01:56.111 Message: lib/rib: Defining dependency "rib" 00:01:56.111 Message: lib/reorder: Defining dependency "reorder" 00:01:56.111 Message: lib/sched: Defining dependency "sched" 00:01:56.111 Message: lib/security: Defining dependency "security" 00:01:56.111 Message: lib/stack: Defining dependency "stack" 00:01:56.111 Has header "linux/userfaultfd.h" : YES 00:01:56.111 Has header "linux/vduse.h" : YES 00:01:56.111 Message: lib/vhost: Defining dependency "vhost" 00:01:56.111 Message: lib/ipsec: Defining dependency "ipsec" 00:01:56.111 Message: lib/pdcp: Defining dependency "pdcp" 00:01:56.111 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:56.111 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:56.111 Message: lib/fib: Defining dependency "fib" 00:01:56.111 Message: lib/port: Defining dependency "port" 00:01:56.111 Message: lib/pdump: Defining dependency "pdump" 00:01:56.111 Message: lib/table: Defining dependency "table" 00:01:56.111 Message: lib/pipeline: Defining dependency "pipeline" 00:01:56.111 Message: lib/graph: Defining dependency "graph" 00:01:56.111 Message: lib/node: Defining dependency "node" 00:01:56.111 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.071 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.071 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.071 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.071 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:57.071 Compiler for C supports arguments -Wno-unused-value: YES 00:01:57.071 Compiler for C supports arguments -Wno-format: YES 00:01:57.071 Compiler for C supports arguments -Wno-format-security: YES 00:01:57.071 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:57.071 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:57.071 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:57.071 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:57.071 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.071 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.071 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:57.071 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:57.071 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:57.071 Has header "sys/epoll.h" : YES 00:01:57.071 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.071 Configuring doxy-api-html.conf using configuration 00:01:57.071 Configuring doxy-api-man.conf using configuration 00:01:57.071 Program mandb found: YES (/usr/bin/mandb) 00:01:57.071 Program sphinx-build found: NO 00:01:57.071 Configuring rte_build_config.h using configuration 00:01:57.071 Message: 00:01:57.071 ================= 00:01:57.071 Applications Enabled 
00:01:57.071 ================= 00:01:57.071 00:01:57.071 apps: 00:01:57.071 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:57.072 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:57.072 test-pmd, test-regex, test-sad, test-security-perf, 00:01:57.072 00:01:57.072 Message: 00:01:57.072 ================= 00:01:57.072 Libraries Enabled 00:01:57.072 ================= 00:01:57.072 00:01:57.072 libs: 00:01:57.072 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.072 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:57.072 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:57.072 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:57.072 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:57.072 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:57.072 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:57.072 00:01:57.072 00:01:57.072 Message: 00:01:57.072 =============== 00:01:57.072 Drivers Enabled 00:01:57.072 =============== 00:01:57.072 00:01:57.072 common: 00:01:57.072 00:01:57.072 bus: 00:01:57.072 pci, vdev, 00:01:57.072 mempool: 00:01:57.072 ring, 00:01:57.072 dma: 00:01:57.072 00:01:57.072 net: 00:01:57.072 i40e, 00:01:57.072 raw: 00:01:57.072 00:01:57.072 crypto: 00:01:57.072 00:01:57.072 compress: 00:01:57.072 00:01:57.072 regex: 00:01:57.072 00:01:57.072 ml: 00:01:57.072 00:01:57.072 vdpa: 00:01:57.072 00:01:57.072 event: 00:01:57.072 00:01:57.072 baseband: 00:01:57.072 00:01:57.072 gpu: 00:01:57.072 00:01:57.072 00:01:57.072 Message: 00:01:57.072 ================= 00:01:57.072 Content Skipped 00:01:57.072 ================= 00:01:57.072 00:01:57.072 apps: 00:01:57.072 00:01:57.072 libs: 00:01:57.072 00:01:57.072 drivers: 00:01:57.072 common/cpt: not in enabled drivers build config 00:01:57.072 common/dpaax: not in enabled drivers build config 00:01:57.072 common/iavf: not in enabled drivers build config 00:01:57.072 common/idpf: not in enabled drivers build config 00:01:57.072 common/mvep: not in enabled drivers build config 00:01:57.072 common/octeontx: not in enabled drivers build config 00:01:57.072 bus/auxiliary: not in enabled drivers build config 00:01:57.072 bus/cdx: not in enabled drivers build config 00:01:57.072 bus/dpaa: not in enabled drivers build config 00:01:57.072 bus/fslmc: not in enabled drivers build config 00:01:57.072 bus/ifpga: not in enabled drivers build config 00:01:57.072 bus/platform: not in enabled drivers build config 00:01:57.072 bus/vmbus: not in enabled drivers build config 00:01:57.072 common/cnxk: not in enabled drivers build config 00:01:57.072 common/mlx5: not in enabled drivers build config 00:01:57.072 common/nfp: not in enabled drivers build config 00:01:57.072 common/qat: not in enabled drivers build config 00:01:57.072 common/sfc_efx: not in enabled drivers build config 00:01:57.072 mempool/bucket: not in enabled drivers build config 00:01:57.072 mempool/cnxk: not in enabled drivers build config 00:01:57.072 mempool/dpaa: not in enabled drivers build config 00:01:57.072 mempool/dpaa2: not in enabled drivers build config 00:01:57.072 mempool/octeontx: not in enabled drivers build config 00:01:57.072 mempool/stack: not in enabled drivers build config 00:01:57.072 dma/cnxk: not in enabled drivers build config 00:01:57.072 dma/dpaa: not in enabled drivers build config 00:01:57.072 dma/dpaa2: not in enabled 
drivers build config 00:01:57.072 dma/hisilicon: not in enabled drivers build config 00:01:57.072 dma/idxd: not in enabled drivers build config 00:01:57.072 dma/ioat: not in enabled drivers build config 00:01:57.072 dma/skeleton: not in enabled drivers build config 00:01:57.072 net/af_packet: not in enabled drivers build config 00:01:57.072 net/af_xdp: not in enabled drivers build config 00:01:57.072 net/ark: not in enabled drivers build config 00:01:57.072 net/atlantic: not in enabled drivers build config 00:01:57.072 net/avp: not in enabled drivers build config 00:01:57.072 net/axgbe: not in enabled drivers build config 00:01:57.072 net/bnx2x: not in enabled drivers build config 00:01:57.072 net/bnxt: not in enabled drivers build config 00:01:57.072 net/bonding: not in enabled drivers build config 00:01:57.072 net/cnxk: not in enabled drivers build config 00:01:57.072 net/cpfl: not in enabled drivers build config 00:01:57.072 net/cxgbe: not in enabled drivers build config 00:01:57.072 net/dpaa: not in enabled drivers build config 00:01:57.072 net/dpaa2: not in enabled drivers build config 00:01:57.072 net/e1000: not in enabled drivers build config 00:01:57.072 net/ena: not in enabled drivers build config 00:01:57.072 net/enetc: not in enabled drivers build config 00:01:57.072 net/enetfec: not in enabled drivers build config 00:01:57.072 net/enic: not in enabled drivers build config 00:01:57.072 net/failsafe: not in enabled drivers build config 00:01:57.072 net/fm10k: not in enabled drivers build config 00:01:57.072 net/gve: not in enabled drivers build config 00:01:57.072 net/hinic: not in enabled drivers build config 00:01:57.072 net/hns3: not in enabled drivers build config 00:01:57.072 net/iavf: not in enabled drivers build config 00:01:57.072 net/ice: not in enabled drivers build config 00:01:57.072 net/idpf: not in enabled drivers build config 00:01:57.072 net/igc: not in enabled drivers build config 00:01:57.072 net/ionic: not in enabled drivers build config 00:01:57.072 net/ipn3ke: not in enabled drivers build config 00:01:57.072 net/ixgbe: not in enabled drivers build config 00:01:57.072 net/mana: not in enabled drivers build config 00:01:57.072 net/memif: not in enabled drivers build config 00:01:57.072 net/mlx4: not in enabled drivers build config 00:01:57.072 net/mlx5: not in enabled drivers build config 00:01:57.072 net/mvneta: not in enabled drivers build config 00:01:57.072 net/mvpp2: not in enabled drivers build config 00:01:57.072 net/netvsc: not in enabled drivers build config 00:01:57.072 net/nfb: not in enabled drivers build config 00:01:57.072 net/nfp: not in enabled drivers build config 00:01:57.072 net/ngbe: not in enabled drivers build config 00:01:57.072 net/null: not in enabled drivers build config 00:01:57.072 net/octeontx: not in enabled drivers build config 00:01:57.072 net/octeon_ep: not in enabled drivers build config 00:01:57.072 net/pcap: not in enabled drivers build config 00:01:57.072 net/pfe: not in enabled drivers build config 00:01:57.072 net/qede: not in enabled drivers build config 00:01:57.072 net/ring: not in enabled drivers build config 00:01:57.072 net/sfc: not in enabled drivers build config 00:01:57.072 net/softnic: not in enabled drivers build config 00:01:57.072 net/tap: not in enabled drivers build config 00:01:57.072 net/thunderx: not in enabled drivers build config 00:01:57.072 net/txgbe: not in enabled drivers build config 00:01:57.072 net/vdev_netvsc: not in enabled drivers build config 00:01:57.072 net/vhost: not in enabled drivers 
build config 00:01:57.072 net/virtio: not in enabled drivers build config 00:01:57.072 net/vmxnet3: not in enabled drivers build config 00:01:57.072 raw/cnxk_bphy: not in enabled drivers build config 00:01:57.072 raw/cnxk_gpio: not in enabled drivers build config 00:01:57.072 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:57.072 raw/ifpga: not in enabled drivers build config 00:01:57.072 raw/ntb: not in enabled drivers build config 00:01:57.072 raw/skeleton: not in enabled drivers build config 00:01:57.072 crypto/armv8: not in enabled drivers build config 00:01:57.072 crypto/bcmfs: not in enabled drivers build config 00:01:57.072 crypto/caam_jr: not in enabled drivers build config 00:01:57.072 crypto/ccp: not in enabled drivers build config 00:01:57.072 crypto/cnxk: not in enabled drivers build config 00:01:57.072 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.072 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.072 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.072 crypto/mlx5: not in enabled drivers build config 00:01:57.072 crypto/mvsam: not in enabled drivers build config 00:01:57.072 crypto/nitrox: not in enabled drivers build config 00:01:57.072 crypto/null: not in enabled drivers build config 00:01:57.072 crypto/octeontx: not in enabled drivers build config 00:01:57.072 crypto/openssl: not in enabled drivers build config 00:01:57.072 crypto/scheduler: not in enabled drivers build config 00:01:57.072 crypto/uadk: not in enabled drivers build config 00:01:57.072 crypto/virtio: not in enabled drivers build config 00:01:57.072 compress/isal: not in enabled drivers build config 00:01:57.072 compress/mlx5: not in enabled drivers build config 00:01:57.072 compress/octeontx: not in enabled drivers build config 00:01:57.072 compress/zlib: not in enabled drivers build config 00:01:57.072 regex/mlx5: not in enabled drivers build config 00:01:57.072 regex/cn9k: not in enabled drivers build config 00:01:57.072 ml/cnxk: not in enabled drivers build config 00:01:57.072 vdpa/ifc: not in enabled drivers build config 00:01:57.072 vdpa/mlx5: not in enabled drivers build config 00:01:57.072 vdpa/nfp: not in enabled drivers build config 00:01:57.072 vdpa/sfc: not in enabled drivers build config 00:01:57.072 event/cnxk: not in enabled drivers build config 00:01:57.072 event/dlb2: not in enabled drivers build config 00:01:57.072 event/dpaa: not in enabled drivers build config 00:01:57.072 event/dpaa2: not in enabled drivers build config 00:01:57.072 event/dsw: not in enabled drivers build config 00:01:57.072 event/opdl: not in enabled drivers build config 00:01:57.072 event/skeleton: not in enabled drivers build config 00:01:57.072 event/sw: not in enabled drivers build config 00:01:57.072 event/octeontx: not in enabled drivers build config 00:01:57.072 baseband/acc: not in enabled drivers build config 00:01:57.072 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:57.072 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:57.072 baseband/la12xx: not in enabled drivers build config 00:01:57.072 baseband/null: not in enabled drivers build config 00:01:57.072 baseband/turbo_sw: not in enabled drivers build config 00:01:57.072 gpu/cuda: not in enabled drivers build config 00:01:57.072 00:01:57.072 00:01:57.072 Build targets in project: 217 00:01:57.072 00:01:57.072 DPDK 23.11.0 00:01:57.072 00:01:57.073 User defined options 00:01:57.073 libdir : lib 00:01:57.073 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 
00:01:57.073 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:57.073 c_link_args : 00:01:57.073 enable_docs : false 00:01:57.073 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:57.073 enable_kmods : false 00:01:57.073 machine : native 00:01:57.073 tests : false 00:01:57.073 00:01:57.073 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.073 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:57.073 05:04:13 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:57.073 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:57.073 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.348 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.348 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.348 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.348 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.348 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.348 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.348 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.348 [9/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.348 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.348 [11/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.348 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.348 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.348 [14/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.348 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.348 [16/707] Linking static target lib/librte_kvargs.a 00:01:57.348 [17/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.348 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.348 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.348 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.348 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.348 [22/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.348 [23/707] Linking static target lib/librte_pci.a 00:01:57.348 [24/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.348 [25/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.607 [26/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.607 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.607 [28/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.607 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.607 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.607 [31/707] Linking static target lib/librte_log.a 00:01:57.607 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.607 [33/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.607 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.607 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.607 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.875 [37/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.875 [38/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.875 [39/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.875 [40/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.875 [41/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.875 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.875 [43/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.875 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.875 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.875 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.875 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.875 [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.875 [49/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.875 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.875 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.875 [52/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.875 [53/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.875 [54/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.875 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.875 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.875 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.875 [58/707] Linking static target lib/librte_meter.a 00:01:57.875 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.875 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.875 [61/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.875 [62/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.875 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.875 [64/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.875 [65/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.875 [66/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.875 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.875 [68/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.875 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.144 [70/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.144 [71/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.144 [72/707] Linking static target 
lib/librte_cmdline.a 00:01:58.144 [73/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.144 [74/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.144 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.144 [76/707] Linking static target lib/librte_ring.a 00:01:58.144 [77/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.144 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.144 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.144 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.145 [81/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.145 [82/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.145 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.145 [84/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.145 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.145 [86/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.145 [87/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:58.145 [88/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.145 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.145 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.145 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.145 [92/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.145 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.145 [94/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:58.145 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.145 [96/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.145 [97/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.145 [98/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:58.145 [99/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:58.145 [100/707] Linking static target lib/librte_metrics.a 00:01:58.145 [101/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.145 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.145 [103/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:58.145 [104/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.145 [105/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:58.145 [106/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.145 [107/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:58.145 [108/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:58.145 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.145 [110/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.145 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.145 [112/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:58.145 [113/707] Linking static target lib/librte_net.a 00:01:58.145 [114/707] Compiling C object 
lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:58.145 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.145 [116/707] Linking static target lib/librte_cfgfile.a 00:01:58.145 [117/707] Linking static target lib/librte_bitratestats.a 00:01:58.145 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.145 [119/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.434 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.434 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.434 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.434 [123/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.434 [124/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.434 [125/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.434 [126/707] Linking target lib/librte_log.so.24.0 00:01:58.434 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.434 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.434 [129/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.434 [130/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.434 [131/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:58.434 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.434 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.434 [134/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.434 [135/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.434 [136/707] Linking static target lib/librte_timer.a 00:01:58.434 [137/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:58.434 [138/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.434 [139/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.434 [140/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.434 [141/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.434 [142/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.434 [143/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:58.434 [144/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.434 [145/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.711 [146/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.711 [147/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:58.711 [148/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:58.711 [149/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.711 [150/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.711 [151/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.711 [152/707] Linking target lib/librte_kvargs.so.24.0 00:01:58.711 [153/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.711 [154/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.711 [155/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.711 [156/707] Linking static target lib/librte_mempool.a 00:01:58.711 [157/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:58.711 [158/707] Linking static target lib/librte_bbdev.a 00:01:58.711 [159/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:58.711 [160/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.711 [161/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.711 [162/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.711 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.711 [164/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:58.711 [165/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:58.711 [166/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.711 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.711 [168/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:58.711 [169/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:58.711 [170/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:58.711 [171/707] Linking static target lib/librte_compressdev.a 00:01:58.711 [172/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.711 [173/707] Linking static target lib/librte_jobstats.a 00:01:58.711 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:58.711 [175/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.711 [176/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.711 [177/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.711 [178/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:58.711 [179/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:58.711 [180/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:58.711 [181/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.971 [182/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:58.971 [183/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:58.971 [184/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:58.971 [185/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:58.971 [186/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:58.971 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:58.971 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:58.971 [189/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:58.971 [190/707] Linking static target lib/librte_dispatcher.a 00:01:58.971 [191/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:58.971 [192/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.971 [193/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:58.971 [194/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.971 [195/707] 
Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:58.971 [196/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.971 [197/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.971 [198/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:58.971 [199/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:58.971 [200/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:58.971 [201/707] Linking static target lib/librte_rcu.a 00:01:58.971 [202/707] Linking static target lib/librte_latencystats.a 00:01:58.971 [203/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:58.971 [204/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.971 [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:58.971 [206/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:58.971 [207/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:58.971 [208/707] Linking static target lib/librte_gpudev.a 00:01:58.971 [209/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:58.971 [210/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:58.971 [211/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.971 [212/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:58.971 [213/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.971 [214/707] Linking static target lib/librte_telemetry.a 00:01:58.971 [215/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:58.971 [216/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.971 [217/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:58.971 [218/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:58.971 [219/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:58.971 [220/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.971 [221/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:59.232 [222/707] Linking static target lib/librte_dmadev.a 00:01:59.232 [223/707] Linking static target lib/librte_gro.a 00:01:59.232 [224/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:59.232 [225/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:59.232 [226/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.232 [227/707] Linking static target lib/librte_stack.a 00:01:59.232 [228/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:59.232 [229/707] Linking static target lib/librte_regexdev.a 00:01:59.232 [230/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:59.232 [231/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.232 [232/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:59.232 [233/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:59.232 [234/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:59.232 [235/707] Linking static target lib/librte_gso.a 00:01:59.232 [236/707] Linking static target lib/librte_rawdev.a 00:01:59.232 [237/707] Linking 
static target lib/librte_distributor.a 00:01:59.232 [238/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:59.232 [239/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.232 [240/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:59.232 [241/707] Linking static target lib/librte_eal.a 00:01:59.232 [242/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.232 [243/707] Linking static target lib/librte_mbuf.a 00:01:59.232 [244/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:59.232 [245/707] Linking static target lib/librte_mldev.a 00:01:59.232 [246/707] Linking static target lib/librte_power.a 00:01:59.232 [247/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:59.232 [248/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.232 [249/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:59.232 [250/707] Linking static target lib/librte_ip_frag.a 00:01:59.232 [251/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:59.233 [252/707] Linking static target lib/librte_pcapng.a 00:01:59.233 [253/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:59.233 [254/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.233 [255/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:59.496 [256/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.496 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:59.496 [258/707] Linking static target lib/librte_reorder.a 00:01:59.496 [259/707] Linking static target lib/librte_bpf.a 00:01:59.496 [260/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.496 [261/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [262/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.496 [263/707] Linking static target lib/librte_security.a 00:01:59.496 [264/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [265/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:59.496 [266/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:59.496 [267/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.496 [268/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [269/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:59.496 [270/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:59.496 [271/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:59.496 [272/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [274/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.496 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.496 [276/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.496 [277/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.496 [278/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:59.496 
[279/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:59.496 [280/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:59.757 [281/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:59.757 [282/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:59.757 [283/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [284/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [285/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:59.757 [286/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [287/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:59.757 [288/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [289/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:59.757 [290/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.757 [291/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:59.757 [292/707] Linking static target lib/librte_lpm.a 00:01:59.757 [293/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [294/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:59.757 [295/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:59.757 [296/707] Linking static target lib/librte_rib.a 00:01:59.757 [297/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:59.757 [298/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:59.757 [299/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [300/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.757 [301/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [302/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [303/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.757 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:59.757 [305/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.020 [306/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:00.020 [307/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:00.020 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:00.020 [309/707] Linking target lib/librte_telemetry.so.24.0 00:02:00.020 [310/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:00.020 [311/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:00.020 [312/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:00.020 [313/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.020 [314/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:00.020 [315/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:00.020 [316/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:00.020 [317/707] Generating 
lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.020 [318/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.020 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:00.020 [320/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:00.020 [321/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:00.020 [322/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:00.020 [323/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:00.020 [324/707] Linking static target lib/librte_efd.a 00:02:00.020 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:00.020 [326/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:00.020 [327/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:00.020 [328/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:00.020 [329/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:00.020 [330/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:00.020 [331/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:00.020 [332/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:00.020 [333/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:00.282 [334/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:00.282 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:00.282 [336/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.282 [337/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.282 [338/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:00.282 [339/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:00.282 [340/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:00.282 [341/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.282 [342/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:00.282 [343/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:00.282 [344/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:00.282 [345/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:00.282 [346/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:00.282 [347/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:00.282 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:00.282 [349/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:00.282 [350/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:00.282 [351/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:00.282 [352/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:00.282 [353/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.282 [354/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:00.282 [355/707] Linking static target lib/librte_fib.a 00:02:00.282 [356/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:00.282 [357/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:00.282 [358/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.542 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:00.542 [360/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:00.542 [361/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:00.542 [362/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.542 [363/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:00.542 [364/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:00.542 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:00.542 [366/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:00.542 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.542 [368/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.542 [369/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.542 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.542 [371/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:00.542 [372/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.542 [373/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.542 [374/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.542 [375/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:00.542 [376/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.542 [377/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:00.542 [378/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:00.542 [379/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:00.804 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.804 [381/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:00.804 [382/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:00.804 [383/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.804 [384/707] Linking static target lib/librte_pdump.a 00:02:00.804 [385/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:00.804 [386/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:00.804 [387/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:00.804 [388/707] Linking static target lib/librte_graph.a 00:02:00.804 [389/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.804 [390/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.804 [391/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:00.804 [392/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:00.804 [393/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:00.804 [394/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:00.804 [395/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:00.804 [396/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:00.804 [397/707] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:00.804 [398/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:00.804 [399/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:00.804 [400/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:00.804 [401/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:00.804 [402/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.804 [403/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:01.067 [404/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:01.067 [405/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.067 [406/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:01.067 [407/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.067 [408/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:01.067 [409/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:01.067 [410/707] Linking static target drivers/librte_bus_vdev.a 00:02:01.067 [411/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:01.067 [412/707] Linking static target lib/librte_sched.a 00:02:01.067 [413/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.067 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:01.068 [415/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:01.068 [416/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:01.068 [417/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:01.068 [418/707] Linking static target lib/librte_table.a 00:02:01.068 [419/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:01.068 [420/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:01.068 [421/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:01.068 [422/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:01.068 [423/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:01.068 [424/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:01.068 [425/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:01.068 [426/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.068 [427/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.068 [428/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:01.068 [429/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:01.068 [430/707] Linking static target lib/librte_cryptodev.a 00:02:01.068 [431/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:01.068 [432/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:01.331 [433/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.331 [434/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.331 [435/707] Linking static target drivers/librte_bus_pci.a 00:02:01.331 [436/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 
00:02:01.331 [437/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:01.331 [438/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:01.332 [439/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.332 [440/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:01.332 [441/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:01.332 [442/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:01.332 [443/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:01.332 [444/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:01.332 [445/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:01.332 [446/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:01.332 [447/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:01.332 [448/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:01.332 [449/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:01.332 [450/707] Linking static target lib/librte_ipsec.a 00:02:01.332 [451/707] Linking static target lib/librte_member.a 00:02:01.332 [452/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:01.332 [453/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.332 [454/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.332 [455/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:01.332 [456/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.593 [457/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:01.593 [458/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:01.593 [459/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:01.593 [460/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:01.593 [461/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.593 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:01.593 [463/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:01.593 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:01.593 [465/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:01.593 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:01.593 [467/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:01.593 [468/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:01.593 [469/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:01.593 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:01.593 [471/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:01.593 [472/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:01.593 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 
00:02:01.593 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:01.593 [475/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:01.593 [476/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:01.593 [477/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.593 [478/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:01.593 [479/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:01.593 [480/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:01.593 [481/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:01.593 [482/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:01.593 [483/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:01.852 [484/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:01.852 [485/707] Linking static target lib/librte_pdcp.a 00:02:01.852 [486/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.852 [487/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:01.852 [488/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:01.852 [489/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:01.852 [490/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.852 [491/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.852 [492/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.852 [493/707] Linking static target lib/librte_hash.a 00:02:01.852 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:01.852 [495/707] Linking static target drivers/librte_mempool_ring.a 00:02:01.852 [496/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:01.852 [497/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.852 [498/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:01.852 [499/707] Linking static target lib/librte_node.a 00:02:01.852 [500/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:01.852 [501/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.852 [502/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:01.852 [503/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:01.852 [504/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.852 [505/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:01.852 [506/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:01.852 [507/707] Linking static target lib/librte_port.a 00:02:01.852 [508/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:01.852 [509/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:01.852 [510/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:01.852 [511/707] Linking static target lib/acl/libavx2_tmp.a 00:02:01.852 
[512/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:01.852 [513/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.852 [514/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:01.852 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:01.852 [516/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:01.852 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:02.111 [518/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:02.111 [519/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:02.111 [520/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:02.111 [521/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:02.111 [522/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:02.111 [523/707] Linking static target lib/librte_eventdev.a 00:02:02.111 [524/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:02.111 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:02.111 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:02.111 [527/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:02.111 [528/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:02.111 [529/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:02.111 [530/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.111 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:02.111 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:02.111 [533/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:02.111 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:02.111 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:02.111 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:02.111 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:02.111 [538/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.111 [539/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:02.111 [540/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:02.111 [541/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:02.111 [542/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:02.111 [543/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.369 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:02.369 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:02.369 [546/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:02.369 [547/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:02.369 [548/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:02.369 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:02.369 [550/707] 
Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:02.369 [551/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:02.369 [552/707] Linking static target lib/librte_acl.a 00:02:02.370 [553/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:02.370 [554/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:02.370 [555/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:02.370 [556/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:02.370 [557/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:02.370 [558/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:02.370 [559/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:02.628 [560/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:02.628 [561/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:02.628 [562/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:02.628 [563/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.628 [564/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:02.628 [565/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:02.628 [566/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.628 [567/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:02.885 [568/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.885 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:02.885 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:02.885 [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:03.142 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:03.142 [573/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:03.142 [574/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.142 [575/707] Linking static target lib/librte_ethdev.a 00:02:03.400 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:03.400 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:03.657 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:03.916 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:03.916 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:04.483 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:04.483 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:04.483 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:04.742 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:04.742 [585/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.742 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.742 [587/707] Linking static target drivers/librte_net_i40e.a 00:02:05.001 [588/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:05.569 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.828 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:05.828 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.396 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:11.664 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.664 [594/707] Linking target lib/librte_eal.so.24.0 00:02:11.664 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:11.664 [596/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.664 [597/707] Linking target lib/librte_meter.so.24.0 00:02:11.664 [598/707] Linking target lib/librte_cfgfile.so.24.0 00:02:11.664 [599/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:11.664 [600/707] Linking target lib/librte_ring.so.24.0 00:02:11.664 [601/707] Linking target lib/librte_pci.so.24.0 00:02:11.664 [602/707] Linking target lib/librte_timer.so.24.0 00:02:11.664 [603/707] Linking target lib/librte_jobstats.so.24.0 00:02:11.664 [604/707] Linking target lib/librte_dmadev.so.24.0 00:02:11.664 [605/707] Linking target lib/librte_acl.so.24.0 00:02:11.664 [606/707] Linking target lib/librte_stack.so.24.0 00:02:11.664 [607/707] Linking target lib/librte_rawdev.so.24.0 00:02:11.664 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:11.664 [609/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:11.664 [610/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:11.664 [611/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:11.665 [612/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:11.665 [613/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:11.665 [614/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:11.665 [615/707] Linking target lib/librte_rcu.so.24.0 00:02:11.665 [616/707] Linking target lib/librte_mempool.so.24.0 00:02:11.665 [617/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:11.665 [618/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:11.665 [619/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:11.665 [620/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:11.665 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:11.665 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:11.665 [623/707] Linking target lib/librte_rib.so.24.0 00:02:11.924 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:11.924 [625/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:11.924 [626/707] Linking target lib/librte_fib.so.24.0 00:02:11.924 [627/707] Linking target lib/librte_distributor.so.24.0 00:02:11.924 [628/707] Linking target lib/librte_mldev.so.24.0 00:02:11.924 [629/707] Linking target lib/librte_net.so.24.0 00:02:11.924 [630/707] Linking target lib/librte_sched.so.24.0 00:02:11.924 [631/707] Linking target 
lib/librte_cryptodev.so.24.0
00:02:11.924 [632/707] Linking target lib/librte_compressdev.so.24.0
00:02:11.924 [633/707] Linking target lib/librte_bbdev.so.24.0
00:02:11.924 [634/707] Linking target lib/librte_reorder.so.24.0
00:02:11.924 [635/707] Linking target lib/librte_gpudev.so.24.0
00:02:11.924 [636/707] Linking target lib/librte_regexdev.so.24.0
00:02:12.184 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:12.184 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:12.184 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:12.184 [640/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:12.184 [641/707] Linking target lib/librte_security.so.24.0
00:02:12.184 [642/707] Linking target lib/librte_hash.so.24.0
00:02:12.184 [643/707] Linking target lib/librte_cmdline.so.24.0
00:02:12.184 [644/707] Linking target lib/librte_ethdev.so.24.0
00:02:12.184 [645/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:12.184 [646/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:12.184 [647/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:12.184 [648/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:12.443 [649/707] Linking target lib/librte_member.so.24.0
00:02:12.443 [650/707] Linking target lib/librte_lpm.so.24.0
00:02:12.443 [651/707] Linking target lib/librte_efd.so.24.0
00:02:12.443 [652/707] Linking static target lib/librte_pipeline.a
00:02:12.443 [653/707] Linking target lib/librte_ipsec.so.24.0
00:02:12.443 [654/707] Linking target lib/librte_pdcp.so.24.0
00:02:12.443 [655/707] Linking target lib/librte_gso.so.24.0
00:02:12.443 [656/707] Linking target lib/librte_gro.so.24.0
00:02:12.443 [657/707] Linking target lib/librte_ip_frag.so.24.0
00:02:12.443 [658/707] Linking target lib/librte_metrics.so.24.0
00:02:12.443 [659/707] Linking target lib/librte_pcapng.so.24.0
00:02:12.443 [660/707] Linking target lib/librte_bpf.so.24.0
00:02:12.443 [661/707] Linking target lib/librte_power.so.24.0
00:02:12.443 [662/707] Linking target lib/librte_eventdev.so.24.0
00:02:12.443 [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:12.443 [664/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:12.443 [665/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:12.443 [666/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:12.443 [667/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:12.443 [668/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:12.443 [669/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:12.443 [670/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:12.443 [671/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:12.443 [672/707] Linking target lib/librte_bitratestats.so.24.0
00:02:12.443 [673/707] Linking target lib/librte_pdump.so.24.0
00:02:12.443 [674/707] Linking static target lib/librte_vhost.a
00:02:12.443 [675/707] Linking target lib/librte_latencystats.so.24.0
00:02:12.443 [676/707] Linking target lib/librte_graph.so.24.0
00:02:12.443 [677/707] Linking target lib/librte_dispatcher.so.24.0
00:02:12.702 [678/707] Linking target lib/librte_port.so.24.0
00:02:12.702 [679/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:12.702 [680/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:12.702 [681/707] Linking target lib/librte_node.so.24.0
00:02:12.702 [682/707] Linking target lib/librte_table.so.24.0
00:02:12.959 [683/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:12.959 [684/707] Linking target app/dpdk-test-cmdline
00:02:12.959 [685/707] Linking target app/dpdk-test-regex
00:02:12.959 [686/707] Linking target app/dpdk-graph
00:02:12.959 [687/707] Linking target app/dpdk-test-gpudev
00:02:12.959 [688/707] Linking target app/dpdk-test-compress-perf
00:02:12.959 [689/707] Linking target app/dpdk-test-fib
00:02:12.959 [690/707] Linking target app/dpdk-test-sad
00:02:12.959 [691/707] Linking target app/dpdk-proc-info
00:02:12.959 [692/707] Linking target app/dpdk-test-mldev
00:02:12.959 [693/707] Linking target app/dpdk-test-flow-perf
00:02:12.959 [694/707] Linking target app/dpdk-test-bbdev
00:02:12.959 [695/707] Linking target app/dpdk-test-pipeline
00:02:12.959 [696/707] Linking target app/dpdk-dumpcap
00:02:12.959 [697/707] Linking target app/dpdk-test-dma-perf
00:02:12.959 [698/707] Linking target app/dpdk-pdump
00:02:12.959 [699/707] Linking target app/dpdk-test-acl
00:02:12.959 [700/707] Linking target app/dpdk-test-crypto-perf
00:02:12.959 [701/707] Linking target app/dpdk-test-security-perf
00:02:12.959 [702/707] Linking target app/dpdk-test-eventdev
00:02:12.959 [703/707] Linking target app/dpdk-testpmd
00:02:14.862 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.862 [705/707] Linking target lib/librte_vhost.so.24.0
00:02:18.150 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.150 [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:18.150 05:04:34 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:02:18.150 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:18.150 [0/1] Installing files.
00:02:18.150 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.150 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.151 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.152 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:18.414 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.415 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.416 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:18.417 Installing
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.417 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 
00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:18.418 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:18.418 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_metrics.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.418 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:02:18.419 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.419 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.419 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.419 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.419 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.419 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.419 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.419 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.419 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.419 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.681 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.682 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.683 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.684 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
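The install phase above stages every public DPDK header into one flat build/include directory, including all per-architecture LPM variants (rte_lpm_sse.h, rte_lpm_neon.h, rte_lpm_sve.h, rte_lpm_altivec.h, rte_lpm_scalar.h); the variants are installed unconditionally and the umbrella rte_lpm.h selects one at compile time through architecture #ifdefs. A minimal spot-check of the staged tree, shown only as a hedged illustration and not part of the pipeline itself:

  # List the umbrella header and its per-arch variants in the staged include dir.
  ls /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/rte_lpm*.h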
00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:18.685 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:18.685 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:18.685 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:02:18.685 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:18.685 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:18.685 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:18.685 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:18.685 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:18.685 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:18.685 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:18.685 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:18.685 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:18.685 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:18.685 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:18.685 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:18.685 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:18.685 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:18.685 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:18.685 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:18.685 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:18.685 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:18.685 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:18.685 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:18.685 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:18.685 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:18.685 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:18.685 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:18.685 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:18.685 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:18.685 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:18.685 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:18.685 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:18.685 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:18.685 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:18.685 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:18.685 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:18.685 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:18.685 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:18.685 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:18.685 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:18.685 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:18.685 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:18.685 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:18.685 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:18.685 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:18.685 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:18.685 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:18.685 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:18.685 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:18.685 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:18.686 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:18.686 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:18.686 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:18.686 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:18.686 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:18.686 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:18.686 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:18.686 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:18.686 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:18.686 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:18.686 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:18.686 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:18.686 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:18.686 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:18.686 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:18.686 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:18.686 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:18.686 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:18.686 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:18.686 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:18.686 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:18.686 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:18.686 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:18.686 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:18.686 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:18.686 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:18.686 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:18.686 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:18.686 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:18.686 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:18.686 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:18.686 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:18.686 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:18.686 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:18.686 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:18.686 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:18.686 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:18.686 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:18.686 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:18.686 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:18.686 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:18.686 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:18.686 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:18.686 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:18.686 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:18.686 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:18.686 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:18.686 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:18.686 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:18.686 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:18.686 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:18.686 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:18.686 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:18.686 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:18.686 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:18.686 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:18.686 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:18.686 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:18.686 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:18.686 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:18.686 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:18.686 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:18.686 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:18.686 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:18.686 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:18.686 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:18.686 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:18.686 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:18.686 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:18.686 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:18.686 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:18.686 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:18.686 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:18.686 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:18.686 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:18.686 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:18.686 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:18.686 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:18.686 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:18.686 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
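Each shared library above gets the standard ELF versioning triplet: the real object carries the full version (librte_X.so.24.0), the soname link (librte_X.so.24) is what the runtime loader resolves, and the unversioned link (librte_X.so) is what -lrte_X resolves at link time; the driver libraries are additionally relocated into the dpdk/pmds-24.0 plugin directory (the './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' moves earlier) so the EAL can locate them at run time. A hedged sketch of the chain being created, using librte_log from the log (illustrative commands, not the installer's actual implementation):

  cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
  ln -sf librte_log.so.24.0 librte_log.so.24   # soname link for the runtime loader
  ln -sf librte_log.so.24 librte_log.so        # development link for -lrte_log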
00:02:18.686 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:18.686 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:18.687 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:18.687 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:18.687 05:04:35 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:18.687 05:04:35 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:18.687 05:04:35 -- common/autobuild_common.sh@203 -- $ cat 00:02:18.687 05:04:35 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:18.687 00:02:18.687 real 0m28.041s 00:02:18.687 user 8m5.922s 00:02:18.687 sys 2m43.610s 00:02:18.687 05:04:35 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:18.687 05:04:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.687 ************************************ 00:02:18.687 END TEST build_native_dpdk 00:02:18.687 ************************************ 00:02:18.945 05:04:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:18.945 05:04:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:18.945 05:04:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:18.945 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:19.203 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:19.203 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:19.203 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:19.461 Using 'verbs' RDMA provider 00:02:32.232 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:47.107 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:47.107 Creating mk/config.mk...done. 00:02:47.107 Creating mk/cc.flags.mk...done. 00:02:47.107 Type 'make' to build. 00:02:47.107 05:05:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:47.107 05:05:02 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:47.107 05:05:02 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:47.107 05:05:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.107 ************************************ 00:02:47.107 START TEST make 00:02:47.107 ************************************ 00:02:47.107 05:05:02 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:47.107 make[1]: Nothing to be done for 'all'. 
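The configure and make kickoff above build SPDK against the freshly staged DPDK rather than the bundled submodule: --with-dpdk points at the build prefix, and configure picks up libdpdk.pc from its lib/pkgconfig directory (the "Using ... for additional libs" line). A condensed, hedged replay of that wiring, with the flag list shortened and the -j count machine-specific (this host uses -j112):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-shared \
      --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
  # Inspect what configure discovered from the staged pkg-config metadata.
  PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig \
      pkg-config --cflags --libs libdpdk
  make -j"$(nproc)"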
00:02:57.084 CC lib/ut_mock/mock.o 00:02:57.084 CC lib/log/log_deprecated.o 00:02:57.084 CC lib/log/log.o 00:02:57.084 CC lib/log/log_flags.o 00:02:57.084 CC lib/ut/ut.o 00:02:57.084 LIB libspdk_ut_mock.a 00:02:57.084 LIB libspdk_log.a 00:02:57.084 LIB libspdk_ut.a 00:02:57.084 SO libspdk_ut_mock.so.5.0 00:02:57.084 SO libspdk_log.so.6.1 00:02:57.084 SO libspdk_ut.so.1.0 00:02:57.084 SYMLINK libspdk_ut_mock.so 00:02:57.084 SYMLINK libspdk_log.so 00:02:57.084 SYMLINK libspdk_ut.so 00:02:57.084 CXX lib/trace_parser/trace.o 00:02:57.084 CC lib/dma/dma.o 00:02:57.084 CC lib/util/base64.o 00:02:57.084 CC lib/ioat/ioat.o 00:02:57.084 CC lib/util/bit_array.o 00:02:57.084 CC lib/util/crc32.o 00:02:57.084 CC lib/util/cpuset.o 00:02:57.084 CC lib/util/crc16.o 00:02:57.084 CC lib/util/crc32c.o 00:02:57.084 CC lib/util/crc32_ieee.o 00:02:57.084 CC lib/util/crc64.o 00:02:57.084 CC lib/util/dif.o 00:02:57.084 CC lib/util/fd.o 00:02:57.084 CC lib/util/file.o 00:02:57.084 CC lib/util/hexlify.o 00:02:57.084 CC lib/util/iov.o 00:02:57.084 CC lib/util/math.o 00:02:57.084 CC lib/util/string.o 00:02:57.084 CC lib/util/pipe.o 00:02:57.084 CC lib/util/strerror_tls.o 00:02:57.084 CC lib/util/uuid.o 00:02:57.084 CC lib/util/fd_group.o 00:02:57.084 CC lib/util/xor.o 00:02:57.084 CC lib/util/zipf.o 00:02:57.084 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.084 CC lib/vfio_user/host/vfio_user.o 00:02:57.084 LIB libspdk_dma.a 00:02:57.084 SO libspdk_dma.so.3.0 00:02:57.084 LIB libspdk_ioat.a 00:02:57.084 SYMLINK libspdk_dma.so 00:02:57.084 SO libspdk_ioat.so.6.0 00:02:57.084 LIB libspdk_vfio_user.a 00:02:57.084 SYMLINK libspdk_ioat.so 00:02:57.084 SO libspdk_vfio_user.so.4.0 00:02:57.084 LIB libspdk_util.a 00:02:57.084 SYMLINK libspdk_vfio_user.so 00:02:57.084 SO libspdk_util.so.8.0 00:02:57.084 SYMLINK libspdk_util.so 00:02:57.084 LIB libspdk_trace_parser.a 00:02:57.084 SO libspdk_trace_parser.so.4.0 00:02:57.084 SYMLINK libspdk_trace_parser.so 00:02:57.084 CC lib/env_dpdk/memory.o 00:02:57.084 CC lib/env_dpdk/env.o 00:02:57.085 CC lib/vmd/vmd.o 00:02:57.085 CC lib/env_dpdk/pci.o 00:02:57.085 CC lib/env_dpdk/pci_ioat.o 00:02:57.085 CC lib/env_dpdk/init.o 00:02:57.085 CC lib/vmd/led.o 00:02:57.085 CC lib/env_dpdk/threads.o 00:02:57.085 CC lib/env_dpdk/pci_virtio.o 00:02:57.085 CC lib/env_dpdk/pci_vmd.o 00:02:57.085 CC lib/env_dpdk/pci_idxd.o 00:02:57.085 CC lib/env_dpdk/pci_event.o 00:02:57.085 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.085 CC lib/env_dpdk/sigbus_handler.o 00:02:57.085 CC lib/env_dpdk/pci_dpdk.o 00:02:57.085 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.085 CC lib/idxd/idxd.o 00:02:57.085 CC lib/idxd/idxd_user.o 00:02:57.085 CC lib/idxd/idxd_kernel.o 00:02:57.085 CC lib/conf/conf.o 00:02:57.085 CC lib/json/json_parse.o 00:02:57.085 CC lib/rdma/common.o 00:02:57.085 CC lib/json/json_util.o 00:02:57.085 CC lib/rdma/rdma_verbs.o 00:02:57.085 CC lib/json/json_write.o 00:02:57.343 LIB libspdk_conf.a 00:02:57.343 SO libspdk_conf.so.5.0 00:02:57.343 LIB libspdk_rdma.a 00:02:57.343 LIB libspdk_json.a 00:02:57.343 SO libspdk_rdma.so.5.0 00:02:57.343 SYMLINK libspdk_conf.so 00:02:57.343 SO libspdk_json.so.5.1 00:02:57.343 SYMLINK libspdk_rdma.so 00:02:57.343 SYMLINK libspdk_json.so 00:02:57.343 LIB libspdk_idxd.a 00:02:57.602 SO libspdk_idxd.so.11.0 00:02:57.602 LIB libspdk_vmd.a 00:02:57.602 SO libspdk_vmd.so.5.0 00:02:57.602 SYMLINK libspdk_idxd.so 00:02:57.602 SYMLINK libspdk_vmd.so 00:02:57.602 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.602 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.602 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.602 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.908 LIB libspdk_jsonrpc.a 00:02:57.908 SO libspdk_jsonrpc.so.5.1 00:02:57.908 SYMLINK libspdk_jsonrpc.so 00:02:58.223 LIB libspdk_env_dpdk.a 00:02:58.223 SO libspdk_env_dpdk.so.13.0 00:02:58.223 CC lib/rpc/rpc.o 00:02:58.223 SYMLINK libspdk_env_dpdk.so 00:02:58.482 LIB libspdk_rpc.a 00:02:58.482 SO libspdk_rpc.so.5.0 00:02:58.482 SYMLINK libspdk_rpc.so 00:02:58.740 CC lib/trace/trace_flags.o 00:02:58.740 CC lib/trace/trace.o 00:02:58.740 CC lib/trace/trace_rpc.o 00:02:58.740 CC lib/notify/notify.o 00:02:58.740 CC lib/notify/notify_rpc.o 00:02:58.740 CC lib/sock/sock.o 00:02:58.740 CC lib/sock/sock_rpc.o 00:02:58.740 LIB libspdk_notify.a 00:02:58.998 LIB libspdk_trace.a 00:02:58.998 SO libspdk_notify.so.5.0 00:02:58.998 SO libspdk_trace.so.9.0 00:02:58.998 SYMLINK libspdk_notify.so 00:02:58.998 SYMLINK libspdk_trace.so 00:02:58.998 LIB libspdk_sock.a 00:02:58.998 SO libspdk_sock.so.8.0 00:02:58.998 SYMLINK libspdk_sock.so 00:02:59.255 CC lib/thread/thread.o 00:02:59.255 CC lib/thread/iobuf.o 00:02:59.255 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.255 CC lib/nvme/nvme_ctrlr.o 00:02:59.255 CC lib/nvme/nvme_fabric.o 00:02:59.255 CC lib/nvme/nvme_ns_cmd.o 00:02:59.256 CC lib/nvme/nvme_ns.o 00:02:59.256 CC lib/nvme/nvme_pcie_common.o 00:02:59.256 CC lib/nvme/nvme_pcie.o 00:02:59.256 CC lib/nvme/nvme_qpair.o 00:02:59.256 CC lib/nvme/nvme_transport.o 00:02:59.256 CC lib/nvme/nvme.o 00:02:59.256 CC lib/nvme/nvme_quirks.o 00:02:59.256 CC lib/nvme/nvme_discovery.o 00:02:59.256 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.256 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.256 CC lib/nvme/nvme_tcp.o 00:02:59.256 CC lib/nvme/nvme_opal.o 00:02:59.256 CC lib/nvme/nvme_io_msg.o 00:02:59.256 CC lib/nvme/nvme_poll_group.o 00:02:59.256 CC lib/nvme/nvme_zns.o 00:02:59.256 CC lib/nvme/nvme_cuse.o 00:02:59.256 CC lib/nvme/nvme_vfio_user.o 00:02:59.256 CC lib/nvme/nvme_rdma.o 00:03:00.628 LIB libspdk_thread.a 00:03:00.628 SO libspdk_thread.so.9.0 00:03:00.628 SYMLINK libspdk_thread.so 00:03:00.628 CC lib/accel/accel.o 00:03:00.628 CC lib/accel/accel_rpc.o 00:03:00.628 CC lib/accel/accel_sw.o 00:03:00.628 CC lib/init/json_config.o 00:03:00.628 CC lib/init/subsystem.o 00:03:00.628 CC lib/init/subsystem_rpc.o 00:03:00.628 CC lib/init/rpc.o 00:03:00.628 CC lib/virtio/virtio.o 00:03:00.628 CC lib/virtio/virtio_pci.o 00:03:00.628 CC lib/virtio/virtio_vhost_user.o 00:03:00.628 CC lib/blob/blobstore.o 00:03:00.628 CC lib/virtio/virtio_vfio_user.o 00:03:00.628 CC lib/blob/request.o 00:03:00.628 CC lib/blob/zeroes.o 00:03:00.628 CC lib/blob/blob_bs_dev.o 00:03:00.885 LIB libspdk_nvme.a 00:03:00.886 LIB libspdk_init.a 00:03:00.886 SO libspdk_init.so.4.0 00:03:00.886 LIB libspdk_virtio.a 00:03:00.886 SO libspdk_nvme.so.12.0 00:03:00.886 SYMLINK libspdk_init.so 00:03:00.886 SO libspdk_virtio.so.6.0 00:03:01.142 SYMLINK libspdk_virtio.so 00:03:01.142 SYMLINK libspdk_nvme.so 00:03:01.142 CC lib/event/app.o 00:03:01.142 CC lib/event/reactor.o 00:03:01.142 CC lib/event/scheduler_static.o 00:03:01.142 CC lib/event/log_rpc.o 00:03:01.143 CC lib/event/app_rpc.o 00:03:01.400 LIB libspdk_accel.a 00:03:01.400 SO libspdk_accel.so.14.0 00:03:01.400 SYMLINK libspdk_accel.so 00:03:01.659 LIB libspdk_event.a 00:03:01.659 SO libspdk_event.so.12.0 00:03:01.659 SYMLINK libspdk_event.so 00:03:01.659 CC lib/bdev/bdev.o 00:03:01.659 CC lib/bdev/bdev_rpc.o 00:03:01.659 CC lib/bdev/bdev_zone.o 00:03:01.659 CC lib/bdev/part.o 00:03:01.659 CC lib/bdev/scsi_nvme.o 00:03:02.594 
LIB libspdk_blob.a 00:03:02.595 SO libspdk_blob.so.10.1 00:03:02.595 SYMLINK libspdk_blob.so 00:03:02.857 CC lib/lvol/lvol.o 00:03:02.857 CC lib/blobfs/blobfs.o 00:03:02.857 CC lib/blobfs/tree.o 00:03:03.429 LIB libspdk_blobfs.a 00:03:03.429 LIB libspdk_bdev.a 00:03:03.429 SO libspdk_blobfs.so.9.0 00:03:03.687 SO libspdk_bdev.so.14.0 00:03:03.687 LIB libspdk_lvol.a 00:03:03.687 SO libspdk_lvol.so.9.1 00:03:03.687 SYMLINK libspdk_blobfs.so 00:03:03.687 SYMLINK libspdk_bdev.so 00:03:03.687 SYMLINK libspdk_lvol.so 00:03:03.947 CC lib/nbd/nbd_rpc.o 00:03:03.947 CC lib/nbd/nbd.o 00:03:03.947 CC lib/ublk/ublk.o 00:03:03.947 CC lib/ublk/ublk_rpc.o 00:03:03.947 CC lib/scsi/port.o 00:03:03.947 CC lib/scsi/dev.o 00:03:03.947 CC lib/ftl/ftl_core.o 00:03:03.947 CC lib/nvmf/ctrlr.o 00:03:03.947 CC lib/scsi/lun.o 00:03:03.947 CC lib/scsi/scsi_bdev.o 00:03:03.947 CC lib/nvmf/ctrlr_discovery.o 00:03:03.947 CC lib/scsi/scsi_pr.o 00:03:03.947 CC lib/scsi/scsi.o 00:03:03.947 CC lib/nvmf/ctrlr_bdev.o 00:03:03.947 CC lib/nvmf/subsystem.o 00:03:03.947 CC lib/scsi/scsi_rpc.o 00:03:03.947 CC lib/ftl/ftl_init.o 00:03:03.947 CC lib/nvmf/nvmf_rpc.o 00:03:03.947 CC lib/nvmf/nvmf.o 00:03:03.947 CC lib/ftl/ftl_layout.o 00:03:03.947 CC lib/ftl/ftl_debug.o 00:03:03.947 CC lib/scsi/task.o 00:03:03.947 CC lib/ftl/ftl_io.o 00:03:03.947 CC lib/nvmf/tcp.o 00:03:03.947 CC lib/nvmf/transport.o 00:03:03.947 CC lib/ftl/ftl_sb.o 00:03:03.947 CC lib/nvmf/rdma.o 00:03:03.947 CC lib/ftl/ftl_nv_cache.o 00:03:03.947 CC lib/ftl/ftl_l2p.o 00:03:03.947 CC lib/ftl/ftl_l2p_flat.o 00:03:03.947 CC lib/ftl/ftl_band.o 00:03:03.947 CC lib/ftl/ftl_band_ops.o 00:03:03.947 CC lib/ftl/ftl_writer.o 00:03:03.947 CC lib/ftl/ftl_rq.o 00:03:03.947 CC lib/ftl/ftl_reloc.o 00:03:03.947 CC lib/ftl/ftl_l2p_cache.o 00:03:03.947 CC lib/ftl/ftl_p2l.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.947 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.947 CC lib/ftl/utils/ftl_conf.o 00:03:03.947 CC lib/ftl/utils/ftl_md.o 00:03:03.947 CC lib/ftl/utils/ftl_mempool.o 00:03:03.947 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.947 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.947 CC lib/ftl/utils/ftl_property.o 00:03:03.947 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.947 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.947 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.947 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.947 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.947 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.947 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.947 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.947 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.947 CC lib/ftl/base/ftl_base_dev.o 00:03:03.947 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.947 CC lib/ftl/ftl_trace.o 00:03:04.513 LIB libspdk_nbd.a 00:03:04.513 SO libspdk_nbd.so.6.0 00:03:04.513 SYMLINK libspdk_nbd.so 00:03:04.513 LIB libspdk_ublk.a 00:03:04.513 SO libspdk_ublk.so.2.0 00:03:04.513 LIB libspdk_scsi.a 00:03:04.513 SYMLINK libspdk_ublk.so 00:03:04.513 SO libspdk_scsi.so.8.0 00:03:04.772 SYMLINK libspdk_scsi.so 00:03:04.772 
LIB libspdk_ftl.a 00:03:04.772 SO libspdk_ftl.so.8.0 00:03:05.030 CC lib/iscsi/conn.o 00:03:05.030 CC lib/iscsi/init_grp.o 00:03:05.030 CC lib/iscsi/iscsi.o 00:03:05.030 CC lib/iscsi/param.o 00:03:05.030 CC lib/iscsi/md5.o 00:03:05.030 CC lib/iscsi/portal_grp.o 00:03:05.030 CC lib/iscsi/tgt_node.o 00:03:05.030 CC lib/iscsi/task.o 00:03:05.030 CC lib/iscsi/iscsi_rpc.o 00:03:05.030 CC lib/iscsi/iscsi_subsystem.o 00:03:05.030 CC lib/vhost/vhost.o 00:03:05.030 CC lib/vhost/vhost_rpc.o 00:03:05.030 CC lib/vhost/vhost_scsi.o 00:03:05.030 CC lib/vhost/vhost_blk.o 00:03:05.030 CC lib/vhost/rte_vhost_user.o 00:03:05.030 SYMLINK libspdk_ftl.so 00:03:05.598 LIB libspdk_nvmf.a 00:03:05.598 SO libspdk_nvmf.so.17.0 00:03:05.598 LIB libspdk_vhost.a 00:03:05.856 SYMLINK libspdk_nvmf.so 00:03:05.856 SO libspdk_vhost.so.7.1 00:03:05.856 SYMLINK libspdk_vhost.so 00:03:05.856 LIB libspdk_iscsi.a 00:03:05.856 SO libspdk_iscsi.so.7.0 00:03:06.114 SYMLINK libspdk_iscsi.so 00:03:06.372 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.630 CC module/blob/bdev/blob_bdev.o 00:03:06.630 CC module/accel/iaa/accel_iaa.o 00:03:06.630 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.630 CC module/accel/error/accel_error.o 00:03:06.630 CC module/accel/error/accel_error_rpc.o 00:03:06.630 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.630 CC module/accel/dsa/accel_dsa.o 00:03:06.630 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.630 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.630 CC module/accel/ioat/accel_ioat.o 00:03:06.630 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.630 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.630 LIB libspdk_env_dpdk_rpc.a 00:03:06.630 CC module/sock/posix/posix.o 00:03:06.630 SO libspdk_env_dpdk_rpc.so.5.0 00:03:06.630 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.630 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.630 LIB libspdk_scheduler_gscheduler.a 00:03:06.630 LIB libspdk_accel_iaa.a 00:03:06.630 LIB libspdk_accel_error.a 00:03:06.630 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:06.630 LIB libspdk_accel_ioat.a 00:03:06.888 LIB libspdk_scheduler_dynamic.a 00:03:06.888 SO libspdk_scheduler_gscheduler.so.3.0 00:03:06.888 SO libspdk_scheduler_dynamic.so.3.0 00:03:06.888 SO libspdk_accel_ioat.so.5.0 00:03:06.888 SO libspdk_accel_error.so.1.0 00:03:06.888 SO libspdk_accel_iaa.so.2.0 00:03:06.888 LIB libspdk_accel_dsa.a 00:03:06.888 LIB libspdk_blob_bdev.a 00:03:06.888 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.888 SO libspdk_blob_bdev.so.10.1 00:03:06.888 SO libspdk_accel_dsa.so.4.0 00:03:06.888 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.888 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.888 SYMLINK libspdk_accel_iaa.so 00:03:06.888 SYMLINK libspdk_accel_error.so 00:03:06.888 SYMLINK libspdk_accel_ioat.so 00:03:06.888 SYMLINK libspdk_blob_bdev.so 00:03:06.888 SYMLINK libspdk_accel_dsa.so 00:03:07.146 LIB libspdk_sock_posix.a 00:03:07.146 SO libspdk_sock_posix.so.5.0 00:03:07.146 CC module/bdev/delay/vbdev_delay.o 00:03:07.146 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.146 CC module/bdev/malloc/bdev_malloc.o 00:03:07.146 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.146 CC module/bdev/gpt/gpt.o 00:03:07.146 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.146 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.146 CC module/bdev/nvme/bdev_nvme.o 00:03:07.146 CC module/bdev/error/vbdev_error.o 00:03:07.146 CC module/bdev/nvme/nvme_rpc.o 00:03:07.146 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.146 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.146 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.146 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.146 CC module/bdev/ftl/bdev_ftl.o 00:03:07.146 CC module/bdev/split/vbdev_split.o 00:03:07.146 CC module/bdev/nvme/vbdev_opal.o 00:03:07.146 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.146 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.146 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.146 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.146 CC module/bdev/raid/bdev_raid.o 00:03:07.146 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.146 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.146 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.146 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.146 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.146 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.146 CC module/bdev/raid/raid0.o 00:03:07.146 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.146 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.146 CC module/bdev/raid/raid1.o 00:03:07.146 CC module/bdev/raid/concat.o 00:03:07.146 CC module/bdev/null/bdev_null.o 00:03:07.146 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.146 CC module/bdev/null/bdev_null_rpc.o 00:03:07.146 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.146 CC module/bdev/aio/bdev_aio.o 00:03:07.146 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.146 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.146 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.146 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.404 SYMLINK libspdk_sock_posix.so 00:03:07.404 LIB libspdk_blobfs_bdev.a 00:03:07.404 LIB libspdk_bdev_split.a 00:03:07.404 SO libspdk_blobfs_bdev.so.5.0 00:03:07.662 LIB libspdk_bdev_error.a 00:03:07.662 LIB libspdk_bdev_gpt.a 00:03:07.662 LIB libspdk_bdev_null.a 00:03:07.662 SO libspdk_bdev_split.so.5.0 00:03:07.662 SO libspdk_bdev_error.so.5.0 00:03:07.662 SO libspdk_bdev_gpt.so.5.0 00:03:07.662 LIB libspdk_bdev_passthru.a 00:03:07.662 SO libspdk_bdev_null.so.5.0 00:03:07.662 LIB libspdk_bdev_ftl.a 00:03:07.662 LIB libspdk_bdev_aio.a 00:03:07.662 SYMLINK libspdk_blobfs_bdev.so 00:03:07.662 LIB libspdk_bdev_delay.a 00:03:07.662 LIB libspdk_bdev_malloc.a 00:03:07.662 SYMLINK libspdk_bdev_split.so 00:03:07.662 SO libspdk_bdev_ftl.so.5.0 00:03:07.662 LIB libspdk_bdev_iscsi.a 00:03:07.662 SO libspdk_bdev_passthru.so.5.0 00:03:07.662 LIB libspdk_bdev_zone_block.a 00:03:07.662 SO libspdk_bdev_aio.so.5.0 00:03:07.662 SYMLINK libspdk_bdev_gpt.so 00:03:07.662 SYMLINK libspdk_bdev_error.so 00:03:07.662 SO libspdk_bdev_malloc.so.5.0 00:03:07.662 SO libspdk_bdev_delay.so.5.0 00:03:07.662 SYMLINK libspdk_bdev_null.so 00:03:07.662 SO libspdk_bdev_iscsi.so.5.0 00:03:07.662 SO libspdk_bdev_zone_block.so.5.0 00:03:07.662 SYMLINK libspdk_bdev_ftl.so 00:03:07.662 SYMLINK libspdk_bdev_aio.so 00:03:07.662 SYMLINK libspdk_bdev_delay.so 00:03:07.662 SYMLINK libspdk_bdev_passthru.so 00:03:07.662 SYMLINK libspdk_bdev_malloc.so 00:03:07.662 LIB libspdk_bdev_lvol.a 00:03:07.662 SYMLINK libspdk_bdev_zone_block.so 00:03:07.662 SYMLINK libspdk_bdev_iscsi.so 00:03:07.662 LIB libspdk_bdev_virtio.a 00:03:07.662 SO libspdk_bdev_lvol.so.5.0 00:03:07.662 SO libspdk_bdev_virtio.so.5.0 00:03:07.920 SYMLINK libspdk_bdev_lvol.so 00:03:07.920 SYMLINK libspdk_bdev_virtio.so 00:03:07.920 LIB libspdk_bdev_raid.a 00:03:07.920 SO libspdk_bdev_raid.so.5.0 00:03:08.179 SYMLINK libspdk_bdev_raid.so 00:03:08.745 LIB libspdk_bdev_nvme.a 00:03:08.745 SO libspdk_bdev_nvme.so.6.0 00:03:09.004 SYMLINK libspdk_bdev_nvme.so 00:03:09.571 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.571 CC 
module/event/subsystems/sock/sock.o 00:03:09.571 CC module/event/subsystems/vmd/vmd.o 00:03:09.571 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.571 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.571 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.571 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.571 LIB libspdk_event_sock.a 00:03:09.571 LIB libspdk_event_scheduler.a 00:03:09.571 LIB libspdk_event_vmd.a 00:03:09.571 SO libspdk_event_sock.so.4.0 00:03:09.571 LIB libspdk_event_vhost_blk.a 00:03:09.571 LIB libspdk_event_iobuf.a 00:03:09.571 SO libspdk_event_scheduler.so.3.0 00:03:09.571 SO libspdk_event_vmd.so.5.0 00:03:09.571 SO libspdk_event_vhost_blk.so.2.0 00:03:09.571 SO libspdk_event_iobuf.so.2.0 00:03:09.571 SYMLINK libspdk_event_sock.so 00:03:09.571 SYMLINK libspdk_event_scheduler.so 00:03:09.571 SYMLINK libspdk_event_vmd.so 00:03:09.571 SYMLINK libspdk_event_vhost_blk.so 00:03:09.571 SYMLINK libspdk_event_iobuf.so 00:03:09.830 CC module/event/subsystems/accel/accel.o 00:03:10.088 LIB libspdk_event_accel.a 00:03:10.088 SO libspdk_event_accel.so.5.0 00:03:10.088 SYMLINK libspdk_event_accel.so 00:03:10.348 CC module/event/subsystems/bdev/bdev.o 00:03:10.607 LIB libspdk_event_bdev.a 00:03:10.607 SO libspdk_event_bdev.so.5.0 00:03:10.607 SYMLINK libspdk_event_bdev.so 00:03:10.866 CC module/event/subsystems/scsi/scsi.o 00:03:10.866 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.866 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.866 CC module/event/subsystems/ublk/ublk.o 00:03:10.866 CC module/event/subsystems/nbd/nbd.o 00:03:11.125 LIB libspdk_event_scsi.a 00:03:11.125 SO libspdk_event_scsi.so.5.0 00:03:11.125 LIB libspdk_event_ublk.a 00:03:11.125 LIB libspdk_event_nbd.a 00:03:11.125 SO libspdk_event_ublk.so.2.0 00:03:11.125 SO libspdk_event_nbd.so.5.0 00:03:11.125 LIB libspdk_event_nvmf.a 00:03:11.125 SYMLINK libspdk_event_scsi.so 00:03:11.125 SO libspdk_event_nvmf.so.5.0 00:03:11.125 SYMLINK libspdk_event_ublk.so 00:03:11.125 SYMLINK libspdk_event_nbd.so 00:03:11.125 SYMLINK libspdk_event_nvmf.so 00:03:11.385 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.385 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.385 LIB libspdk_event_vhost_scsi.a 00:03:11.385 LIB libspdk_event_iscsi.a 00:03:11.644 SO libspdk_event_vhost_scsi.so.2.0 00:03:11.644 SO libspdk_event_iscsi.so.5.0 00:03:11.644 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.644 SYMLINK libspdk_event_iscsi.so 00:03:11.644 SO libspdk.so.5.0 00:03:11.644 SYMLINK libspdk.so 00:03:11.904 CXX app/trace/trace.o 00:03:11.904 CC app/spdk_lspci/spdk_lspci.o 00:03:11.904 CC app/spdk_top/spdk_top.o 00:03:11.904 CC app/spdk_nvme_perf/perf.o 00:03:11.904 CC app/spdk_nvme_discover/discovery_aer.o 00:03:11.904 TEST_HEADER include/spdk/accel.h 00:03:11.904 TEST_HEADER include/spdk/accel_module.h 00:03:11.904 TEST_HEADER include/spdk/assert.h 00:03:11.904 TEST_HEADER include/spdk/barrier.h 00:03:11.904 TEST_HEADER include/spdk/base64.h 00:03:11.904 TEST_HEADER include/spdk/bdev_module.h 00:03:11.904 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.904 TEST_HEADER include/spdk/bdev.h 00:03:11.904 CC app/spdk_nvme_identify/identify.o 00:03:11.904 TEST_HEADER include/spdk/bit_array.h 00:03:11.904 CC app/trace_record/trace_record.o 00:03:11.904 TEST_HEADER include/spdk/bit_pool.h 00:03:11.904 CC app/nvmf_tgt/nvmf_main.o 00:03:11.904 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.904 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:11.904 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:11.904 TEST_HEADER 
include/spdk/blobfs.h 00:03:11.904 TEST_HEADER include/spdk/blob.h 00:03:11.904 CC test/rpc_client/rpc_client_test.o 00:03:11.904 TEST_HEADER include/spdk/conf.h 00:03:11.904 TEST_HEADER include/spdk/config.h 00:03:11.904 TEST_HEADER include/spdk/crc16.h 00:03:11.904 TEST_HEADER include/spdk/cpuset.h 00:03:11.904 TEST_HEADER include/spdk/crc32.h 00:03:11.904 TEST_HEADER include/spdk/dif.h 00:03:11.904 TEST_HEADER include/spdk/crc64.h 00:03:11.904 TEST_HEADER include/spdk/dma.h 00:03:11.904 TEST_HEADER include/spdk/endian.h 00:03:11.904 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.904 TEST_HEADER include/spdk/env.h 00:03:11.904 TEST_HEADER include/spdk/event.h 00:03:11.904 CC app/vhost/vhost.o 00:03:11.904 TEST_HEADER include/spdk/fd_group.h 00:03:11.905 TEST_HEADER include/spdk/fd.h 00:03:11.905 TEST_HEADER include/spdk/ftl.h 00:03:11.905 TEST_HEADER include/spdk/file.h 00:03:11.905 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.905 TEST_HEADER include/spdk/histogram_data.h 00:03:12.169 TEST_HEADER include/spdk/hexlify.h 00:03:12.169 TEST_HEADER include/spdk/idxd.h 00:03:12.169 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.169 TEST_HEADER include/spdk/init.h 00:03:12.169 TEST_HEADER include/spdk/ioat.h 00:03:12.169 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.169 TEST_HEADER include/spdk/json.h 00:03:12.169 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.169 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.169 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.169 TEST_HEADER include/spdk/likely.h 00:03:12.169 TEST_HEADER include/spdk/log.h 00:03:12.169 TEST_HEADER include/spdk/memory.h 00:03:12.169 TEST_HEADER include/spdk/lvol.h 00:03:12.169 CC app/spdk_dd/spdk_dd.o 00:03:12.169 TEST_HEADER include/spdk/mmio.h 00:03:12.169 TEST_HEADER include/spdk/nbd.h 00:03:12.169 TEST_HEADER include/spdk/notify.h 00:03:12.169 TEST_HEADER include/spdk/nvme.h 00:03:12.169 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.169 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.169 CC app/spdk_tgt/spdk_tgt.o 00:03:12.169 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.169 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.169 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.169 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.169 TEST_HEADER include/spdk/nvmf.h 00:03:12.169 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.169 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.169 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.169 TEST_HEADER include/spdk/opal_spec.h 00:03:12.169 TEST_HEADER include/spdk/opal.h 00:03:12.169 TEST_HEADER include/spdk/pci_ids.h 00:03:12.169 TEST_HEADER include/spdk/pipe.h 00:03:12.169 TEST_HEADER include/spdk/queue.h 00:03:12.169 TEST_HEADER include/spdk/reduce.h 00:03:12.169 TEST_HEADER include/spdk/rpc.h 00:03:12.169 TEST_HEADER include/spdk/scheduler.h 00:03:12.169 TEST_HEADER include/spdk/scsi.h 00:03:12.169 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.169 TEST_HEADER include/spdk/sock.h 00:03:12.169 TEST_HEADER include/spdk/stdinc.h 00:03:12.169 TEST_HEADER include/spdk/string.h 00:03:12.169 TEST_HEADER include/spdk/trace.h 00:03:12.169 TEST_HEADER include/spdk/thread.h 00:03:12.169 TEST_HEADER include/spdk/trace_parser.h 00:03:12.169 TEST_HEADER include/spdk/tree.h 00:03:12.169 TEST_HEADER include/spdk/util.h 00:03:12.169 TEST_HEADER include/spdk/uuid.h 00:03:12.169 TEST_HEADER include/spdk/ublk.h 00:03:12.169 TEST_HEADER include/spdk/version.h 00:03:12.169 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.169 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.169 TEST_HEADER include/spdk/vhost.h 
00:03:12.169 TEST_HEADER include/spdk/vmd.h 00:03:12.169 TEST_HEADER include/spdk/zipf.h 00:03:12.169 TEST_HEADER include/spdk/xor.h 00:03:12.169 CXX test/cpp_headers/accel.o 00:03:12.170 CXX test/cpp_headers/assert.o 00:03:12.170 CXX test/cpp_headers/accel_module.o 00:03:12.170 CC examples/accel/perf/accel_perf.o 00:03:12.170 CXX test/cpp_headers/barrier.o 00:03:12.170 CC examples/sock/hello_world/hello_sock.o 00:03:12.170 CXX test/cpp_headers/bdev.o 00:03:12.170 CXX test/cpp_headers/base64.o 00:03:12.170 CXX test/cpp_headers/bdev_module.o 00:03:12.170 CC examples/nvme/reconnect/reconnect.o 00:03:12.170 CC examples/ioat/verify/verify.o 00:03:12.170 CXX test/cpp_headers/bdev_zone.o 00:03:12.170 CXX test/cpp_headers/bit_array.o 00:03:12.170 CXX test/cpp_headers/bit_pool.o 00:03:12.170 CXX test/cpp_headers/blob_bdev.o 00:03:12.170 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.170 CXX test/cpp_headers/blobfs.o 00:03:12.170 CXX test/cpp_headers/conf.o 00:03:12.170 CXX test/cpp_headers/blob.o 00:03:12.170 CC examples/nvme/arbitration/arbitration.o 00:03:12.170 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:12.170 CC examples/util/zipf/zipf.o 00:03:12.170 CXX test/cpp_headers/crc32.o 00:03:12.170 CXX test/cpp_headers/crc16.o 00:03:12.170 CXX test/cpp_headers/cpuset.o 00:03:12.170 CC examples/ioat/perf/perf.o 00:03:12.170 CXX test/cpp_headers/config.o 00:03:12.170 CC examples/vmd/led/led.o 00:03:12.170 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.170 CXX test/cpp_headers/dif.o 00:03:12.170 CXX test/cpp_headers/crc64.o 00:03:12.170 CXX test/cpp_headers/dma.o 00:03:12.170 CXX test/cpp_headers/endian.o 00:03:12.170 CXX test/cpp_headers/env_dpdk.o 00:03:12.170 CXX test/cpp_headers/event.o 00:03:12.170 CXX test/cpp_headers/env.o 00:03:12.170 CXX test/cpp_headers/fd_group.o 00:03:12.170 CXX test/cpp_headers/fd.o 00:03:12.170 CC app/fio/nvme/fio_plugin.o 00:03:12.170 CXX test/cpp_headers/file.o 00:03:12.170 CC examples/idxd/perf/perf.o 00:03:12.170 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.170 CXX test/cpp_headers/ftl.o 00:03:12.170 CXX test/cpp_headers/gpt_spec.o 00:03:12.170 CXX test/cpp_headers/hexlify.o 00:03:12.170 CXX test/cpp_headers/histogram_data.o 00:03:12.170 CC examples/nvme/hello_world/hello_world.o 00:03:12.170 CC test/app/jsoncat/jsoncat.o 00:03:12.170 CC test/app/stub/stub.o 00:03:12.170 CXX test/cpp_headers/idxd.o 00:03:12.170 CC test/app/histogram_perf/histogram_perf.o 00:03:12.170 CXX test/cpp_headers/idxd_spec.o 00:03:12.170 CC examples/nvme/hotplug/hotplug.o 00:03:12.170 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.170 CC test/thread/poller_perf/poller_perf.o 00:03:12.170 CC examples/nvme/abort/abort.o 00:03:12.170 CC test/env/memory/memory_ut.o 00:03:12.170 CC test/nvme/overhead/overhead.o 00:03:12.170 CC test/nvme/sgl/sgl.o 00:03:12.170 CC test/env/pci/pci_ut.o 00:03:12.170 CC test/nvme/aer/aer.o 00:03:12.170 CC test/env/vtophys/vtophys.o 00:03:12.170 CC examples/bdev/bdevperf/bdevperf.o 00:03:12.170 CC examples/bdev/hello_world/hello_bdev.o 00:03:12.170 CC examples/blob/cli/blobcli.o 00:03:12.170 CC test/nvme/reset/reset.o 00:03:12.170 CC test/nvme/simple_copy/simple_copy.o 00:03:12.170 CC test/nvme/startup/startup.o 00:03:12.170 CC test/event/reactor_perf/reactor_perf.o 00:03:12.170 CC test/nvme/e2edp/nvme_dp.o 00:03:12.170 CC test/nvme/connect_stress/connect_stress.o 00:03:12.170 CC examples/blob/hello_world/hello_blob.o 00:03:12.170 CC test/nvme/reserve/reserve.o 00:03:12.170 CC test/nvme/err_injection/err_injection.o 00:03:12.170 CC 
test/event/event_perf/event_perf.o 00:03:12.170 CC test/nvme/boot_partition/boot_partition.o 00:03:12.170 CC test/event/reactor/reactor.o 00:03:12.170 CC test/app/bdev_svc/bdev_svc.o 00:03:12.170 CC app/fio/bdev/fio_plugin.o 00:03:12.170 CC test/accel/dif/dif.o 00:03:12.170 CC test/nvme/fdp/fdp.o 00:03:12.170 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.170 CC test/nvme/cuse/cuse.o 00:03:12.170 CC test/nvme/compliance/nvme_compliance.o 00:03:12.170 CXX test/cpp_headers/init.o 00:03:12.170 CC test/bdev/bdevio/bdevio.o 00:03:12.170 CC examples/nvmf/nvmf/nvmf.o 00:03:12.170 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.170 CC test/event/app_repeat/app_repeat.o 00:03:12.170 CC examples/thread/thread/thread_ex.o 00:03:12.170 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.170 CC test/blobfs/mkfs/mkfs.o 00:03:12.170 CC test/dma/test_dma/test_dma.o 00:03:12.433 CC test/event/scheduler/scheduler.o 00:03:12.433 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.433 LINK spdk_lspci 00:03:12.433 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.433 CC test/lvol/esnap/esnap.o 00:03:12.433 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:12.433 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:12.433 LINK interrupt_tgt 00:03:12.433 LINK nvmf_tgt 00:03:12.433 LINK spdk_nvme_discover 00:03:12.702 LINK vhost 00:03:12.702 LINK rpc_client_test 00:03:12.702 LINK led 00:03:12.702 LINK histogram_perf 00:03:12.702 LINK lsvmd 00:03:12.702 LINK jsoncat 00:03:12.702 LINK zipf 00:03:12.702 LINK poller_perf 00:03:12.702 LINK spdk_trace_record 00:03:12.702 LINK cmb_copy 00:03:12.702 LINK iscsi_tgt 00:03:12.702 LINK spdk_tgt 00:03:12.702 LINK reactor 00:03:12.702 LINK startup 00:03:12.702 LINK verify 00:03:12.702 LINK app_repeat 00:03:12.702 LINK ioat_perf 00:03:12.702 LINK stub 00:03:12.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:12.702 LINK reactor_perf 00:03:12.702 LINK reserve 00:03:12.702 LINK err_injection 00:03:12.962 LINK boot_partition 00:03:12.962 LINK vtophys 00:03:12.962 CXX test/cpp_headers/ioat.o 00:03:12.962 LINK hello_sock 00:03:12.962 LINK event_perf 00:03:12.962 CXX test/cpp_headers/ioat_spec.o 00:03:12.962 CXX test/cpp_headers/iscsi_spec.o 00:03:12.962 LINK pmr_persistence 00:03:12.962 CXX test/cpp_headers/json.o 00:03:12.962 CXX test/cpp_headers/jsonrpc.o 00:03:12.962 CXX test/cpp_headers/likely.o 00:03:12.962 CXX test/cpp_headers/log.o 00:03:12.962 CXX test/cpp_headers/lvol.o 00:03:12.962 CXX test/cpp_headers/memory.o 00:03:12.962 LINK env_dpdk_post_init 00:03:12.962 CXX test/cpp_headers/mmio.o 00:03:12.962 LINK bdev_svc 00:03:12.962 CXX test/cpp_headers/nbd.o 00:03:12.962 CXX test/cpp_headers/notify.o 00:03:12.962 CXX test/cpp_headers/nvme.o 00:03:12.962 CXX test/cpp_headers/nvme_intel.o 00:03:12.962 LINK doorbell_aers 00:03:12.962 CXX test/cpp_headers/nvme_ocssd.o 00:03:12.962 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:12.962 CXX test/cpp_headers/nvme_spec.o 00:03:12.962 CXX test/cpp_headers/nvme_zns.o 00:03:12.962 LINK connect_stress 00:03:12.962 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.962 LINK hello_bdev 00:03:12.962 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:12.962 CXX test/cpp_headers/nvmf.o 00:03:12.962 LINK mkfs 00:03:12.962 CXX test/cpp_headers/nvmf_spec.o 00:03:12.962 CXX test/cpp_headers/nvmf_transport.o 00:03:12.962 CXX test/cpp_headers/opal.o 00:03:12.962 LINK fused_ordering 00:03:12.962 CXX test/cpp_headers/opal_spec.o 00:03:12.962 CXX test/cpp_headers/pci_ids.o 00:03:12.962 LINK simple_copy 00:03:12.962 LINK nvme_dp 00:03:12.962 LINK hello_blob 
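The TEST_HEADER list above and the CXX test/cpp_headers/*.o objects that accompany it (continuing below) are SPDK's header self-containment check: every public header is compiled into its own translation unit, so a header missing one of its own includes fails the build. A minimal sketch of that style of check, assuming a flat include/spdk layout; the loop, temp-file names, and compiler flags are illustrative, not the actual test/cpp_headers harness:

# Compile a one-line translation unit per public header; a header that
# does not build standalone is missing one of its own includes.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_${name}.c"
    cc -Iinclude -c "/tmp/hdr_${name}.c" -o "/tmp/hdr_${name}.o" ||
        echo "not self-contained: $hdr"
done

The objects in the log are built with CXX, so the real harness presumably uses a C++ compiler; the principle is the same.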
00:03:12.962 CXX test/cpp_headers/pipe.o 00:03:12.962 CXX test/cpp_headers/queue.o 00:03:12.962 LINK hello_world 00:03:12.962 LINK hotplug 00:03:12.962 CXX test/cpp_headers/reduce.o 00:03:12.962 CXX test/cpp_headers/rpc.o 00:03:12.962 CXX test/cpp_headers/scheduler.o 00:03:12.962 LINK arbitration 00:03:12.962 LINK nvmf 00:03:12.962 LINK reset 00:03:12.962 LINK thread 00:03:12.962 CXX test/cpp_headers/scsi.o 00:03:12.962 CXX test/cpp_headers/scsi_spec.o 00:03:12.962 CXX test/cpp_headers/sock.o 00:03:12.962 CXX test/cpp_headers/stdinc.o 00:03:12.962 LINK idxd_perf 00:03:12.962 CXX test/cpp_headers/string.o 00:03:12.962 CXX test/cpp_headers/thread.o 00:03:12.962 CXX test/cpp_headers/trace.o 00:03:12.962 LINK fdp 00:03:12.962 CXX test/cpp_headers/trace_parser.o 00:03:12.962 CXX test/cpp_headers/tree.o 00:03:12.962 LINK scheduler 00:03:12.962 LINK nvme_compliance 00:03:12.962 LINK overhead 00:03:12.962 LINK sgl 00:03:12.962 CXX test/cpp_headers/ublk.o 00:03:12.962 LINK aer 00:03:12.962 CXX test/cpp_headers/util.o 00:03:12.962 CXX test/cpp_headers/uuid.o 00:03:13.221 CXX test/cpp_headers/version.o 00:03:13.221 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.221 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.221 LINK abort 00:03:13.221 CXX test/cpp_headers/vhost.o 00:03:13.221 CXX test/cpp_headers/vmd.o 00:03:13.221 CXX test/cpp_headers/xor.o 00:03:13.221 CXX test/cpp_headers/zipf.o 00:03:13.221 LINK spdk_dd 00:03:13.221 LINK reconnect 00:03:13.221 LINK spdk_trace 00:03:13.221 LINK pci_ut 00:03:13.221 LINK test_dma 00:03:13.221 LINK bdevio 00:03:13.221 LINK nvme_manage 00:03:13.221 LINK dif 00:03:13.221 LINK accel_perf 00:03:13.479 LINK spdk_nvme 00:03:13.480 LINK blobcli 00:03:13.480 LINK nvme_fuzz 00:03:13.480 LINK spdk_bdev 00:03:13.480 LINK spdk_top 00:03:13.480 LINK mem_callbacks 00:03:13.480 LINK spdk_nvme_perf 00:03:13.737 LINK spdk_nvme_identify 00:03:13.737 LINK bdevperf 00:03:13.737 LINK vhost_fuzz 00:03:13.737 LINK memory_ut 00:03:13.995 LINK cuse 00:03:14.254 LINK iscsi_fuzz 00:03:16.155 LINK esnap 00:03:16.413 00:03:16.413 real 0m30.693s 00:03:16.413 user 4m52.915s 00:03:16.413 sys 2m48.260s 00:03:16.413 05:05:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:16.413 05:05:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.413 ************************************ 00:03:16.413 END TEST make 00:03:16.413 ************************************ 00:03:16.672 05:05:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:16.672 05:05:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:16.672 05:05:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:16.672 05:05:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:16.672 05:05:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:16.672 05:05:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:16.672 05:05:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:16.672 05:05:33 -- scripts/common.sh@335 -- # IFS=.-: 00:03:16.672 05:05:33 -- scripts/common.sh@335 -- # read -ra ver1 00:03:16.672 05:05:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.672 05:05:33 -- scripts/common.sh@336 -- # read -ra ver2 00:03:16.672 05:05:33 -- scripts/common.sh@337 -- # local 'op=<' 00:03:16.672 05:05:33 -- scripts/common.sh@339 -- # ver1_l=2 00:03:16.672 05:05:33 -- scripts/common.sh@340 -- # ver2_l=1 00:03:16.672 05:05:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:16.672 05:05:33 -- scripts/common.sh@343 -- # case "$op" in 00:03:16.672 05:05:33 -- scripts/common.sh@344 -- # : 1 
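The scripts/common.sh trace that begins just above (and continues below) is the lt 1.15 2 guard: autotest_common.sh compares the installed lcov version against 2 to pick the --rc lcov_branch_coverage=1 option spelling used for the rest of this run. A condensed sketch of that component-wise comparison, assuming dot/dash-separated integer fields as the IFS=.-: in the trace implies; the real helper routes through cmp_versions and a decimal() sanitizer:

# lt A B: succeed (return 0) exactly when version A sorts before version B.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "pre-2.0 lcov option spelling"   # 1 < 2, so this fires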
00:03:16.672 05:05:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:16.672 05:05:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.672 05:05:33 -- scripts/common.sh@364 -- # decimal 1 00:03:16.672 05:05:33 -- scripts/common.sh@352 -- # local d=1 00:03:16.672 05:05:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.672 05:05:33 -- scripts/common.sh@354 -- # echo 1 00:03:16.672 05:05:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:16.672 05:05:33 -- scripts/common.sh@365 -- # decimal 2 00:03:16.672 05:05:33 -- scripts/common.sh@352 -- # local d=2 00:03:16.672 05:05:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.672 05:05:33 -- scripts/common.sh@354 -- # echo 2 00:03:16.672 05:05:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:16.672 05:05:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:16.672 05:05:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:16.672 05:05:33 -- scripts/common.sh@367 -- # return 0 00:03:16.672 05:05:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.672 05:05:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.672 --rc genhtml_branch_coverage=1 00:03:16.672 --rc genhtml_function_coverage=1 00:03:16.672 --rc genhtml_legend=1 00:03:16.672 --rc geninfo_all_blocks=1 00:03:16.672 --rc geninfo_unexecuted_blocks=1 00:03:16.672 00:03:16.672 ' 00:03:16.672 05:05:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.672 --rc genhtml_branch_coverage=1 00:03:16.672 --rc genhtml_function_coverage=1 00:03:16.672 --rc genhtml_legend=1 00:03:16.672 --rc geninfo_all_blocks=1 00:03:16.672 --rc geninfo_unexecuted_blocks=1 00:03:16.672 00:03:16.672 ' 00:03:16.672 05:05:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.672 --rc genhtml_branch_coverage=1 00:03:16.672 --rc genhtml_function_coverage=1 00:03:16.672 --rc genhtml_legend=1 00:03:16.672 --rc geninfo_all_blocks=1 00:03:16.672 --rc geninfo_unexecuted_blocks=1 00:03:16.672 00:03:16.672 ' 00:03:16.672 05:05:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.672 --rc genhtml_branch_coverage=1 00:03:16.672 --rc genhtml_function_coverage=1 00:03:16.672 --rc genhtml_legend=1 00:03:16.672 --rc geninfo_all_blocks=1 00:03:16.672 --rc geninfo_unexecuted_blocks=1 00:03:16.672 00:03:16.672 ' 00:03:16.672 05:05:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.672 05:05:33 -- nvmf/common.sh@7 -- # uname -s 00:03:16.672 05:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.672 05:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.672 05:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.672 05:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.672 05:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.672 05:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.672 05:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.672 05:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.672 05:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.672 05:05:33 -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:03:16.672 05:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:16.672 05:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:16.672 05:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.672 05:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.672 05:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.672 05:05:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:16.672 05:05:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.672 05:05:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.672 05:05:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.672 05:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.672 05:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.672 05:05:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.672 05:05:33 -- paths/export.sh@5 -- # export PATH 00:03:16.672 05:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.672 05:05:33 -- nvmf/common.sh@46 -- # : 0 00:03:16.672 05:05:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:16.672 05:05:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:16.672 05:05:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:16.672 05:05:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.672 05:05:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.672 05:05:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:16.672 05:05:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:16.672 05:05:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:16.672 05:05:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.672 05:05:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.672 05:05:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.672 05:05:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.672 05:05:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.672 05:05:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.672 05:05:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.672 05:05:33 -- 
spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.672 05:05:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.672 05:05:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.672 05:05:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.672 05:05:33 -- spdk/autotest.sh@48 -- # udevadm_pid=1586436 00:03:16.672 05:05:33 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.672 05:05:33 -- spdk/autotest.sh@54 -- # echo 1586438 00:03:16.672 05:05:33 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.673 05:05:33 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.673 05:05:33 -- spdk/autotest.sh@56 -- # echo 1586439 00:03:16.673 05:05:33 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:03:16.673 05:05:33 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:16.673 05:05:33 -- spdk/autotest.sh@60 -- # echo 1586440 00:03:16.673 05:05:33 -- spdk/autotest.sh@62 -- # echo 1586441 00:03:16.673 05:05:33 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:16.673 05:05:33 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.673 05:05:33 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:16.673 05:05:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:16.673 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:03:16.673 05:05:33 -- spdk/autotest.sh@70 -- # create_test_list 00:03:16.673 05:05:33 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:16.673 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:03:16.673 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:16.673 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:16.673 05:05:33 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:16.673 05:05:33 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.673 05:05:33 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.673 05:05:33 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:16.673 05:05:33 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.673 05:05:33 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:16.673 05:05:33 -- common/autotest_common.sh@1450 -- # uname 00:03:16.673 05:05:33 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:16.673 05:05:33 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:16.673 05:05:33 -- common/autotest_common.sh@1470 -- # uname 00:03:16.673 05:05:33 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:16.673 05:05:33 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:16.673 05:05:33 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:16.931 lcov: LCOV version 1.15 00:03:16.931 05:05:33 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:19.460 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:19.460 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:19.460 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:19.460 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:19.460 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:19.460 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:41.381 05:05:55 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:41.381 05:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.381 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:03:41.381 05:05:55 -- spdk/autotest.sh@89 -- # rm -f 00:03:41.381 05:05:55 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.318 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.318 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.576 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:42.576 05:05:59 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:42.576 05:05:59 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:42.576 05:05:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:42.576 05:05:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:42.576 05:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:42.576 05:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:42.576 05:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:42.577 05:05:59 
-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.577 05:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:42.577 05:05:59 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:42.577 05:05:59 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:42.577 05:05:59 -- spdk/autotest.sh@108 -- # grep -v p 00:03:42.577 05:05:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:42.577 05:05:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:42.577 05:05:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:42.577 05:05:59 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:42.577 05:05:59 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.838 No valid GPT data, bailing 00:03:42.838 05:05:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.838 05:05:59 -- scripts/common.sh@393 -- # pt= 00:03:42.838 05:05:59 -- scripts/common.sh@394 -- # return 1 00:03:42.838 05:05:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.838 1+0 records in 00:03:42.838 1+0 records out 00:03:42.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509513 s, 206 MB/s 00:03:42.838 05:05:59 -- spdk/autotest.sh@116 -- # sync 00:03:42.838 05:05:59 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.838 05:05:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.838 05:05:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.003 05:06:06 -- spdk/autotest.sh@122 -- # uname -s 00:03:51.003 05:06:06 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:51.003 05:06:06 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.003 05:06:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.003 05:06:06 -- common/autotest_common.sh@10 -- # set +x 00:03:51.003 ************************************ 00:03:51.003 START TEST setup.sh 00:03:51.003 ************************************ 00:03:51.003 05:06:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.003 * Looking for test storage... 
00:03:51.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:51.003 05:06:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.003 05:06:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.003 05:06:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.003 05:06:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.003 05:06:06 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.003 05:06:06 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.003 05:06:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.003 05:06:06 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.003 05:06:06 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.003 05:06:06 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.003 05:06:06 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.003 05:06:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.003 05:06:06 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.003 05:06:06 -- scripts/common.sh@344 -- # : 1 00:03:51.003 05:06:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.003 05:06:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.003 05:06:06 -- scripts/common.sh@364 -- # decimal 1 00:03:51.003 05:06:06 -- scripts/common.sh@352 -- # local d=1 00:03:51.003 05:06:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.003 05:06:06 -- scripts/common.sh@354 -- # echo 1 00:03:51.003 05:06:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.003 05:06:06 -- scripts/common.sh@365 -- # decimal 2 00:03:51.003 05:06:06 -- scripts/common.sh@352 -- # local d=2 00:03:51.003 05:06:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.003 05:06:06 -- scripts/common.sh@354 -- # echo 2 00:03:51.003 05:06:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.003 05:06:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.003 05:06:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.003 05:06:06 -- scripts/common.sh@367 -- # return 0 00:03:51.003 05:06:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.003 --rc genhtml_branch_coverage=1 00:03:51.003 --rc genhtml_function_coverage=1 00:03:51.003 --rc genhtml_legend=1 00:03:51.003 --rc geninfo_all_blocks=1 00:03:51.003 --rc geninfo_unexecuted_blocks=1 00:03:51.003 00:03:51.003 ' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.003 --rc genhtml_branch_coverage=1 00:03:51.003 --rc genhtml_function_coverage=1 00:03:51.003 --rc genhtml_legend=1 00:03:51.003 --rc geninfo_all_blocks=1 00:03:51.003 --rc geninfo_unexecuted_blocks=1 00:03:51.003 00:03:51.003 ' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.003 --rc genhtml_branch_coverage=1 00:03:51.003 --rc genhtml_function_coverage=1 00:03:51.003 --rc genhtml_legend=1 00:03:51.003 --rc geninfo_all_blocks=1 00:03:51.003 --rc geninfo_unexecuted_blocks=1 00:03:51.003 00:03:51.003 ' 
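Earlier in this run's pre-cleanup, get_zoned_devs probed /sys/block/nvme*/queue/zoned and found nothing zoned (hence the (( 0 > 0 )) above), spdk-gpt.py reported 'No valid GPT data, bailing', and dd zeroed the first MiB of /dev/nvme0n1 (1048576 bytes in 0.00509513 s is the reported 206 MB/s). A sketch of that zoned-device filter; the associative-array shape follows the trace, but the exact bookkeeping in autotest_common.sh may differ:

# Collect block devices whose queue reports a zoned model other than "none";
# tests that assume conventional block devices must skip these.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=$(basename "$nvme")
    [[ -e $nvme/queue/zoned ]] || continue   # attribute absent: not zoned
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned devices: ${#zoned_devs[@]}"   # 0 on this rig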
00:03:51.003 05:06:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.003 --rc genhtml_branch_coverage=1 00:03:51.003 --rc genhtml_function_coverage=1 00:03:51.003 --rc genhtml_legend=1 00:03:51.003 --rc geninfo_all_blocks=1 00:03:51.003 --rc geninfo_unexecuted_blocks=1 00:03:51.003 00:03:51.003 ' 00:03:51.003 05:06:06 -- setup/test-setup.sh@10 -- # uname -s 00:03:51.003 05:06:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.003 05:06:06 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:51.003 05:06:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.003 05:06:06 -- common/autotest_common.sh@10 -- # set +x 00:03:51.003 ************************************ 00:03:51.003 START TEST acl 00:03:51.003 ************************************ 00:03:51.003 05:06:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:51.003 * Looking for test storage... 00:03:51.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:51.003 05:06:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.003 05:06:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.003 05:06:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.003 05:06:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.003 05:06:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.003 05:06:06 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.003 05:06:06 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.003 05:06:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.003 05:06:06 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.003 05:06:06 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.003 05:06:06 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.003 05:06:06 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.003 05:06:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.003 05:06:06 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.003 05:06:06 -- scripts/common.sh@344 -- # : 1 00:03:51.003 05:06:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.003 05:06:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.004 05:06:06 -- scripts/common.sh@364 -- # decimal 1 00:03:51.004 05:06:06 -- scripts/common.sh@352 -- # local d=1 00:03:51.004 05:06:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.004 05:06:06 -- scripts/common.sh@354 -- # echo 1 00:03:51.004 05:06:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.004 05:06:06 -- scripts/common.sh@365 -- # decimal 2 00:03:51.004 05:06:06 -- scripts/common.sh@352 -- # local d=2 00:03:51.004 05:06:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.004 05:06:06 -- scripts/common.sh@354 -- # echo 2 00:03:51.004 05:06:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.004 05:06:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.004 05:06:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.004 05:06:06 -- scripts/common.sh@367 -- # return 0 00:03:51.004 05:06:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.004 05:06:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.004 --rc genhtml_branch_coverage=1 00:03:51.004 --rc genhtml_function_coverage=1 00:03:51.004 --rc genhtml_legend=1 00:03:51.004 --rc geninfo_all_blocks=1 00:03:51.004 --rc geninfo_unexecuted_blocks=1 00:03:51.004 00:03:51.004 ' 00:03:51.004 05:06:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.004 --rc genhtml_branch_coverage=1 00:03:51.004 --rc genhtml_function_coverage=1 00:03:51.004 --rc genhtml_legend=1 00:03:51.004 --rc geninfo_all_blocks=1 00:03:51.004 --rc geninfo_unexecuted_blocks=1 00:03:51.004 00:03:51.004 ' 00:03:51.004 05:06:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.004 --rc genhtml_branch_coverage=1 00:03:51.004 --rc genhtml_function_coverage=1 00:03:51.004 --rc genhtml_legend=1 00:03:51.004 --rc geninfo_all_blocks=1 00:03:51.004 --rc geninfo_unexecuted_blocks=1 00:03:51.004 00:03:51.004 ' 00:03:51.004 05:06:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.004 --rc genhtml_branch_coverage=1 00:03:51.004 --rc genhtml_function_coverage=1 00:03:51.004 --rc genhtml_legend=1 00:03:51.004 --rc geninfo_all_blocks=1 00:03:51.004 --rc geninfo_unexecuted_blocks=1 00:03:51.004 00:03:51.004 ' 00:03:51.004 05:06:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.004 05:06:06 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:51.004 05:06:06 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:51.004 05:06:06 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:51.004 05:06:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.004 05:06:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:51.004 05:06:06 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:51.004 05:06:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.004 05:06:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.004 05:06:06 -- setup/acl.sh@12 -- # devs=() 00:03:51.004 05:06:06 -- setup/acl.sh@12 -- # declare -a devs 00:03:51.004 05:06:06 -- setup/acl.sh@13 -- # drivers=() 00:03:51.004 05:06:06 -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.004 05:06:06 -- setup/acl.sh@51 -- # 
setup reset 00:03:51.004 05:06:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.004 05:06:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.350 05:06:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:54.350 05:06:10 -- setup/acl.sh@16 -- # local dev driver 00:03:54.350 05:06:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.350 05:06:10 -- setup/acl.sh@15 -- # setup output status 00:03:54.350 05:06:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.350 05:06:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:57.634 Hugepages 00:03:57.634 node hugesize free / total 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # continue 00:03:57.634 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # continue 00:03:57.634 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.634 05:06:13 -- setup/acl.sh@19 -- # continue 00:03:57.634 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.634 00:03:57.635 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- 
setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:13 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.635 05:06:13 -- setup/acl.sh@20 -- # continue 00:03:57.635 05:06:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:14 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:57.635 05:06:14 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:57.635 05:06:14 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:57.635 05:06:14 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:57.635 05:06:14 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:57.635 05:06:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.635 05:06:14 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:57.635 05:06:14 -- setup/acl.sh@54 -- # run_test denied denied 00:03:57.635 05:06:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.635 05:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.635 05:06:14 -- common/autotest_common.sh@10 -- # set +x 00:03:57.635 ************************************ 00:03:57.635 START TEST denied 00:03:57.635 ************************************ 00:03:57.635 05:06:14 -- common/autotest_common.sh@1114 -- # denied 00:03:57.635 05:06:14 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:57.635 05:06:14 -- setup/acl.sh@38 -- # setup output config 
00:03:57.635 05:06:14 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:57.635 05:06:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.635 05:06:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:01.826 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:01.826 05:06:17 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:01.826 05:06:17 -- setup/acl.sh@28 -- # local dev driver 00:04:01.826 05:06:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.826 05:06:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:01.826 05:06:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:01.826 05:06:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.826 05:06:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.826 05:06:17 -- setup/acl.sh@41 -- # setup reset 00:04:01.826 05:06:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.826 05:06:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.097 00:04:07.097 real 0m8.506s 00:04:07.097 user 0m2.787s 00:04:07.097 sys 0m5.115s 00:04:07.097 05:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.097 05:06:22 -- common/autotest_common.sh@10 -- # set +x 00:04:07.097 ************************************ 00:04:07.097 END TEST denied 00:04:07.097 ************************************ 00:04:07.097 05:06:22 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:07.097 05:06:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.097 05:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.097 05:06:22 -- common/autotest_common.sh@10 -- # set +x 00:04:07.097 ************************************ 00:04:07.097 START TEST allowed 00:04:07.097 ************************************ 00:04:07.097 05:06:22 -- common/autotest_common.sh@1114 -- # allowed 00:04:07.097 05:06:22 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:07.097 05:06:22 -- setup/acl.sh@45 -- # setup output config 00:04:07.097 05:06:22 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:07.097 05:06:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.097 05:06:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:11.284 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.284 05:06:27 -- setup/acl.sh@47 -- # verify 00:04:11.284 05:06:27 -- setup/acl.sh@28 -- # local dev driver 00:04:11.284 05:06:27 -- setup/acl.sh@48 -- # setup reset 00:04:11.284 05:06:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.284 05:06:27 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.470 00:04:15.470 real 0m8.883s 00:04:15.470 user 0m2.151s 00:04:15.470 sys 0m4.705s 00:04:15.470 05:06:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.470 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.470 ************************************ 00:04:15.470 END TEST allowed 00:04:15.470 ************************************ 00:04:15.470 00:04:15.470 real 0m25.009s 00:04:15.470 user 0m7.715s 00:04:15.470 sys 0m14.974s 00:04:15.470 05:06:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.470 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.470 ************************************ 00:04:15.470 END TEST acl 00:04:15.470 ************************************ 
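TEST denied above blocks 0000:d8:00.0 through PCI_BLOCKED and then confirms setup.sh left the controller on the nvme driver, while TEST allowed sets PCI_ALLOWED and expects the nvme -> vfio-pci rebind seen in the config output. The verification in the setup/acl.sh@32 trace is a single sysfs readlink; a sketch with an illustrative function name:

# Print the kernel driver a PCI function is currently bound to.
bound_driver() {
    local bdf=$1
    [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1   # unbound
    basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
}
bound_driver 0000:d8:00.0   # e.g. "nvme" before the rebind, "vfio-pci" after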
00:04:15.470 05:06:31 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:15.470 05:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.470 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.470 ************************************ 00:04:15.470 START TEST hugepages 00:04:15.470 ************************************ 00:04:15.470 05:06:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:15.470 * Looking for test storage... 00:04:15.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:15.470 05:06:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:15.470 05:06:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:15.470 05:06:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:15.470 05:06:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:15.470 05:06:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:15.470 05:06:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:15.470 05:06:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:15.470 05:06:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:15.470 05:06:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.470 05:06:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:15.470 05:06:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:15.470 05:06:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:15.470 05:06:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:15.470 05:06:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:15.470 05:06:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:15.470 05:06:31 -- scripts/common.sh@344 -- # : 1 00:04:15.470 05:06:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:15.470 05:06:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.470 05:06:31 -- scripts/common.sh@364 -- # decimal 1 00:04:15.470 05:06:31 -- scripts/common.sh@352 -- # local d=1 00:04:15.470 05:06:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.470 05:06:31 -- scripts/common.sh@354 -- # echo 1 00:04:15.470 05:06:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:15.470 05:06:31 -- scripts/common.sh@365 -- # decimal 2 00:04:15.470 05:06:31 -- scripts/common.sh@352 -- # local d=2 00:04:15.470 05:06:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.470 05:06:31 -- scripts/common.sh@354 -- # echo 2 00:04:15.470 05:06:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:15.470 05:06:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:15.470 05:06:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:15.470 05:06:31 -- scripts/common.sh@367 -- # return 0 00:04:15.470 05:06:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.470 --rc genhtml_branch_coverage=1 00:04:15.470 --rc genhtml_function_coverage=1 00:04:15.470 --rc genhtml_legend=1 00:04:15.470 --rc geninfo_all_blocks=1 00:04:15.470 --rc geninfo_unexecuted_blocks=1 00:04:15.470 00:04:15.470 ' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.470 --rc genhtml_branch_coverage=1 00:04:15.470 --rc genhtml_function_coverage=1 00:04:15.470 --rc genhtml_legend=1 00:04:15.470 --rc geninfo_all_blocks=1 00:04:15.470 --rc geninfo_unexecuted_blocks=1 00:04:15.470 00:04:15.470 ' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.470 --rc genhtml_branch_coverage=1 00:04:15.470 --rc genhtml_function_coverage=1 00:04:15.470 --rc genhtml_legend=1 00:04:15.470 --rc geninfo_all_blocks=1 00:04:15.470 --rc geninfo_unexecuted_blocks=1 00:04:15.470 00:04:15.470 ' 00:04:15.470 05:06:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.470 --rc genhtml_branch_coverage=1 00:04:15.470 --rc genhtml_function_coverage=1 00:04:15.470 --rc genhtml_legend=1 00:04:15.470 --rc geninfo_all_blocks=1 00:04:15.470 --rc geninfo_unexecuted_blocks=1 00:04:15.470 00:04:15.470 ' 00:04:15.470 05:06:31 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:15.470 05:06:31 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:15.470 05:06:31 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:15.470 05:06:31 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:15.470 05:06:31 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:15.471 05:06:31 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:15.471 05:06:31 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:15.471 05:06:31 -- setup/common.sh@18 -- # local node= 00:04:15.471 05:06:31 -- setup/common.sh@19 -- # local var val 00:04:15.471 05:06:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.471 05:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.471 05:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.471 05:06:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.471 05:06:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.471 
05:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 40747300 kB' 'MemAvailable: 44462408 kB' 'Buffers: 4100 kB' 'Cached: 11234808 kB' 'SwapCached: 0 kB' 'Active: 8011996 kB' 'Inactive: 3698740 kB' 'Active(anon): 7622036 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475344 kB' 'Mapped: 196568 kB' 'Shmem: 7150208 kB' 'KReclaimable: 246548 kB' 'Slab: 1032324 kB' 'SReclaimable: 246548 kB' 'SUnreclaim: 785776 kB' 'KernelStack: 21920 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433332 kB' 'Committed_AS: 8801908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217660 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.471 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.471 05:06:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:15.471 05:06:31 -- setup/common.sh@32 -- # continue [... xtrace repeats the same "[[ <field> == Hugepagesize ]] / continue" scan for the intervening /proc/meminfo fields ...] 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': '
00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # continue 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.472 05:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.472 05:06:31 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.472 05:06:31 -- setup/common.sh@33 -- # echo 2048 00:04:15.472 05:06:31 -- setup/common.sh@33 -- # return 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:15.472 05:06:31 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:15.472 05:06:31 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:15.472 05:06:31 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:15.472 05:06:31 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:15.472 05:06:31 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:15.472 05:06:31 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:15.472 05:06:31 -- setup/hugepages.sh@207 -- # get_nodes 00:04:15.472 05:06:31 -- setup/hugepages.sh@27 -- # local node 00:04:15.472 05:06:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.472 05:06:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:15.472 05:06:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.472 05:06:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.472 05:06:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.472 05:06:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.472 05:06:31 -- setup/hugepages.sh@208 -- # clear_hp 00:04:15.472 05:06:31 -- setup/hugepages.sh@37 -- # local node hp 00:04:15.472 05:06:31 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.472 05:06:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.472 05:06:31 -- setup/hugepages.sh@41 -- # echo 0 
00:04:15.472 05:06:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.472 05:06:31 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.472 05:06:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.472 05:06:31 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.472 05:06:31 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.472 05:06:31 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.472 05:06:31 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:15.472 05:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.472 05:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.472 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.472 ************************************ 00:04:15.472 START TEST default_setup 00:04:15.472 ************************************ 00:04:15.472 05:06:31 -- common/autotest_common.sh@1114 -- # default_setup 00:04:15.472 05:06:31 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.472 05:06:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.472 05:06:31 -- setup/hugepages.sh@51 -- # shift 00:04:15.472 05:06:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.472 05:06:31 -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.472 05:06:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.472 05:06:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.472 05:06:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.472 05:06:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.472 05:06:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.472 05:06:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.472 05:06:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.472 05:06:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.472 05:06:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:15.472 05:06:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.472 05:06:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.472 05:06:31 -- setup/hugepages.sh@73 -- # return 0 00:04:15.472 05:06:31 -- setup/hugepages.sh@137 -- # setup output 00:04:15.472 05:06:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.472 05:06:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:18.760 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.760 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
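The trace above prepares hugepages for the test: clear_hp zeroes every per-node nr_hugepages counter, then get_test_nr_hugepages converts the 2097152 kB request into 1024 pages of the 2048 kB default size on node 0 before setup.sh runs. A minimal sketch of that bookkeeping, with assumed variable names rather than the verbatim SPDK helpers:

#!/usr/bin/env bash
# Sketch only: clear all hugepage reservations, then reserve the test amount
# on one node, mirroring clear_hp and get_test_nr_hugepages in the log above.
set -euo pipefail

size_kb=2097152   # requested test size, as in "get_test_nr_hugepages 2097152 0"
node=0            # target NUMA node, as in node_ids=('0')

# Hugepagesize from /proc/meminfo (2048 kB on this machine).
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# clear_hp equivalent: zero every per-node, per-size reservation counter.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done

# Reserve size/Hugepagesize pages on the chosen node (2097152 / 2048 = 1024).
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages" | sudo tee \
    "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages" > /dev/null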
00:04:19.019 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.019 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.924 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.924 05:06:37 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:20.924 05:06:37 -- setup/hugepages.sh@89 -- # local node 00:04:20.924 05:06:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.924 05:06:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.924 05:06:37 -- setup/hugepages.sh@92 -- # local surp 00:04:20.924 05:06:37 -- setup/hugepages.sh@93 -- # local resv 00:04:20.924 05:06:37 -- setup/hugepages.sh@94 -- # local anon 00:04:20.924 05:06:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.924 05:06:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.924 05:06:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.924 05:06:37 -- setup/common.sh@18 -- # local node= 00:04:20.924 05:06:37 -- setup/common.sh@19 -- # local var val 00:04:20.924 05:06:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.924 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.924 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.924 05:06:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.924 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.924 05:06:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.924 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.924 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.925 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42888636 kB' 'MemAvailable: 46603696 kB' 'Buffers: 4100 kB' 'Cached: 11234944 kB' 'SwapCached: 0 kB' 'Active: 8019124 kB' 'Inactive: 3698740 kB' 'Active(anon): 7629164 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482252 kB' 'Mapped: 197524 kB' 'Shmem: 7150344 kB' 'KReclaimable: 246452 kB' 'Slab: 1031012 kB' 'SReclaimable: 246452 kB' 'SUnreclaim: 784560 kB' 'KernelStack: 22080 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8811180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218016 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:20.925 05:06:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.925 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.925 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.925 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.925 05:06:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.925 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.925 05:06:37 -- setup/common.sh@31 -- 
# IFS=': ' [... xtrace repeats the same "[[ <field> == AnonHugePages ]] / continue" scan for the intervening /proc/meminfo fields ...] 00:04:20.926 05:06:37 -- setup/common.sh@31 --
# read -r var val _ 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.926 05:06:37 -- setup/common.sh@33 -- # echo 0 00:04:20.926 05:06:37 -- setup/common.sh@33 -- # return 0 00:04:20.926 05:06:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.926 05:06:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.926 05:06:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.926 05:06:37 -- setup/common.sh@18 -- # local node= 00:04:20.926 05:06:37 -- setup/common.sh@19 -- # local var val 00:04:20.926 05:06:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.926 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.926 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.926 05:06:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.926 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.926 05:06:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.926 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42893312 kB' 'MemAvailable: 46608372 kB' 'Buffers: 4100 kB' 'Cached: 11234948 kB' 'SwapCached: 0 kB' 'Active: 8015948 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625988 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479016 kB' 'Mapped: 197236 kB' 'Shmem: 7150348 kB' 'KReclaimable: 246452 kB' 'Slab: 1031004 kB' 'SReclaimable: 246452 kB' 'SUnreclaim: 784552 kB' 'KernelStack: 22048 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8805840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.926 05:06:37 -- setup/common.sh@32 -- # continue 00:04:20.926 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.926 05:06:37 -- 
setup/common.sh@31 -- # read -r var val _ [... xtrace repeats the same "[[ <field> == HugePages_Surp ]] / continue" scan for the intervening /proc/meminfo fields ...] 00:04:21.189
05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.189 05:06:37 -- setup/common.sh@33 -- # echo 0 00:04:21.189 05:06:37 -- setup/common.sh@33 -- # return 0 00:04:21.189 05:06:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:21.189 05:06:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.189 05:06:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.189 05:06:37 -- setup/common.sh@18 -- # local node= 00:04:21.189 05:06:37 -- setup/common.sh@19 -- # local var val 00:04:21.189 05:06:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.189 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.189 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.189 05:06:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.189 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.189 05:06:37 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42893532 kB' 'MemAvailable: 46608592 kB' 'Buffers: 4100 kB' 'Cached: 11234960 kB' 'SwapCached: 0 kB' 'Active: 8019364 kB' 'Inactive: 3698740 kB' 'Active(anon): 7629404 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482344 kB' 'Mapped: 197488 kB' 'Shmem: 7150360 kB' 'KReclaimable: 246452 kB' 'Slab: 1031004 kB' 'SReclaimable: 246452 kB' 'SUnreclaim: 784552 kB' 'KernelStack: 21968 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8811208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217984 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.189 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.189 05:06:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:21.189 05:06:37 -- setup/common.sh@32 -- # continue [... xtrace repeats the same "[[ <field> == HugePages_Rsvd ]] / continue" scan for the remaining /proc/meminfo fields ...]
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # continue 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.190 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.190 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.190 05:06:37 -- setup/common.sh@33 -- # echo 0 00:04:21.190 05:06:37 -- setup/common.sh@33 -- # return 0 00:04:21.190 05:06:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.190 05:06:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.190 nr_hugepages=1024 00:04:21.190 05:06:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.190 resv_hugepages=0 00:04:21.190 05:06:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.190 surplus_hugepages=0 00:04:21.190 05:06:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.190 anon_hugepages=0 00:04:21.191 05:06:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.191 05:06:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.191 05:06:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.191 05:06:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.191 05:06:37 -- setup/common.sh@18 -- # local node= 00:04:21.191 05:06:37 -- setup/common.sh@19 -- # local var val 00:04:21.191 05:06:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.191 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.191 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.191 05:06:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.191 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.191 05:06:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.191 05:06:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.191 05:06:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.191 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42897428 kB' 'MemAvailable: 46612488 kB' 'Buffers: 4100 kB' 'Cached: 11234972 kB' 'SwapCached: 0 kB' 'Active: 8013372 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623412 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
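
The get_meminfo helper traced above (setup/common.sh@16-33) reads /proc/meminfo into an array, strips any "Node N " prefix with an extglob pattern, then splits each line on ': ' and echoes the value of the first field that matches the requested key. The odd-looking right-hand side \H\u\g\e\P\a\g\e\s\_\R\s\v\d is simply how bash xtrace renders a quoted pattern: each character is escaped so [[ == ]] compares literally. A minimal sketch of the idiom, assuming bash 4+ (names are illustrative, not SPDK's exact code):

  #!/usr/bin/env bash
  shopt -s extglob                     # needed for the +([0-9]) pattern below
  get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs; use them when a node id is supplied.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  get_meminfo HugePages_Rsvd     # prints 0 on the machine above
  get_meminfo HugePages_Surp 0   # node0's surplus huge pages

The accounting check that follows holds trivially here: HugePages_Total (1024) equals nr_hugepages (1024) plus surplus (0) plus reserved (0).
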
00:04:21.191 05:06:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:21.191 05:06:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:21.191 05:06:37 -- setup/common.sh@18 -- # local node=
00:04:21.191 05:06:37 -- setup/common.sh@19 -- # local var val
00:04:21.191 05:06:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.191 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.191 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.191 05:06:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.191 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.191 05:06:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.191 05:06:37 -- setup/common.sh@31 -- # IFS=': '
00:04:21.191 05:06:37 -- setup/common.sh@31 -- # read -r var val _
00:04:21.191 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42897428 kB' 'MemAvailable: 46612488 kB' 'Buffers: 4100 kB' 'Cached: 11234972 kB' 'SwapCached: 0 kB' 'Active: 8013372 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623412 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476372 kB' 'Mapped: 196664 kB' 'Shmem: 7150372 kB' 'KReclaimable: 246452 kB' 'Slab: 1031004 kB' 'SReclaimable: 246452 kB' 'SUnreclaim: 784552 kB' 'KernelStack: 22064 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8805104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
[setup/common.sh@31-32: the read / compare / continue cycle again walks every field of the snapshot above, MemTotal through Unaccepted, until HugePages_Total matches]
00:04:21.192 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.192 05:06:37 -- setup/common.sh@33 -- # echo 1024
00:04:21.192 05:06:37 -- setup/common.sh@33 -- # return 0
00:04:21.192 05:06:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.192 05:06:37 -- setup/hugepages.sh@112 -- # get_nodes
00:04:21.192 05:06:37 -- setup/hugepages.sh@27 -- # local node
00:04:21.192 05:06:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.192 05:06:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:21.192 05:06:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.192 05:06:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:21.192 05:06:37 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:21.192 05:06:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
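
get_nodes (setup/hugepages.sh@27-33) discovers the NUMA topology the same way: an extglob pattern expands the node directories under sysfs, and ${node##*node} trims each path down to its numeric id before the per-node HugePages_Total is recorded (1024 on node0, 0 on node1, hence no_nodes=2). A sketch of that enumeration, reusing the illustrative get_meminfo above (not SPDK's exact code):

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                        # /sys/.../node1 -> 1
    nodes_sys[$id]=$(get_meminfo HugePages_Total "$id")
  done
  echo "per-node totals: ${nodes_sys[*]}"    # here: 1024 (node0) and 0 (node1)
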
00:04:21.192 05:06:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.192 05:06:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.192 05:06:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:21.192 05:06:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.192 05:06:37 -- setup/common.sh@18 -- # local node=0
00:04:21.192 05:06:37 -- setup/common.sh@19 -- # local var val
00:04:21.192 05:06:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.192 05:06:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.192 05:06:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:21.192 05:06:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:21.192 05:06:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.192 05:06:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.192 05:06:37 -- setup/common.sh@31 -- # IFS=': '
00:04:21.192 05:06:37 -- setup/common.sh@31 -- # read -r var val _
00:04:21.192 05:06:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25930756 kB' 'MemUsed: 6654612 kB' 'SwapCached: 0 kB' 'Active: 2836640 kB' 'Inactive: 176568 kB' 'Active(anon): 2652356 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712104 kB' 'Mapped: 116664 kB' 'AnonPages: 304376 kB' 'Shmem: 2351252 kB' 'KernelStack: 12360 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106936 kB' 'Slab: 497724 kB' 'SReclaimable: 106936 kB' 'SUnreclaim: 390788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the read / compare / continue cycle walks node0's fields above, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:04:21.193 05:06:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.193 05:06:37 -- setup/common.sh@33 -- # echo 0
00:04:21.193 05:06:37 -- setup/common.sh@33 -- # return 0
00:04:21.193 05:06:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.193 05:06:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.193 05:06:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.193 05:06:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.193 05:06:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:21.193 05:06:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:21.193 real 0m5.712s
00:04:21.193 user 0m1.420s
00:04:21.193 sys 0m2.420s
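
default_setup passes because all 1024 pre-allocated 2048 kB pages landed on node0, exactly what the test expected ("node0=1024 expecting 1024"), in about 5.7 s of wall time. The sorted_t/sorted_s assignments use the counts themselves as associative-array keys, so matching distributions yield matching key sets regardless of node order. The same per-node count can also be read straight from sysfs, independent of the meminfo parsing (a hypothetical cross-check, not part of the suite):

  expected=1024
  actual=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
  [[ $actual == "$expected" ]] && echo "node0=$actual expecting $expected"
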
00:04:21.193 05:06:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:21.193 05:06:37 -- common/autotest_common.sh@10 -- # set +x
00:04:21.193 ************************************
00:04:21.193 END TEST default_setup
00:04:21.193 ************************************
00:04:21.193 05:06:37 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:21.193 05:06:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:21.193 05:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:21.193 05:06:37 -- common/autotest_common.sh@10 -- # set +x
00:04:21.193 ************************************
00:04:21.193 START TEST per_node_1G_alloc
00:04:21.193 ************************************
00:04:21.193 05:06:37 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:21.193 05:06:37 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:21.193 05:06:37 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:21.193 05:06:37 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:21.193 05:06:37 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:21.193 05:06:37 -- setup/hugepages.sh@51 -- # shift
00:04:21.193 05:06:37 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:21.193 05:06:37 -- setup/hugepages.sh@52 -- # local node_ids
00:04:21.193 05:06:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:21.193 05:06:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:21.193 05:06:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:21.193 05:06:37 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:21.193 05:06:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:21.193 05:06:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:21.193 05:06:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:21.193 05:06:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:21.193 05:06:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:21.193 05:06:37 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:21.194 05:06:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:21.194 05:06:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:21.194 05:06:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:21.194 05:06:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:21.194 05:06:37 -- setup/hugepages.sh@73 -- # return 0
00:04:21.194 05:06:37 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:21.194 05:06:37 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:21.194 05:06:37 -- setup/hugepages.sh@146 -- # setup output
00:04:21.194 05:06:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.194 05:06:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:24.480 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.480 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
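
per_node_1G_alloc asks get_test_nr_hugepages for 1 GiB on each node: 1048576 kB divided by the 2048 kB default huge page size gives the NRHUGE=512 seen above, requested on both sockets via HUGENODE=0,1, so 1024 pages in total once setup.sh has run. The conversion is plain integer math (a sketch; variable names are illustrative):

  size_kb=1048576                         # 1 GiB, expressed in kB
  hugepagesize_kb=2048                    # Hugepagesize from /proc/meminfo
  nr=$(( size_kb / hugepagesize_kb ))     # 512 pages per requested node
  echo "NRHUGE=$nr HUGENODE=0,1"          # 2 nodes x 512 = 1024 pages overall

The vfio-pci lines above are setup.sh walking the test node's PCI functions (the DMA engines and the NVMe device under test); every function was already claimed by vfio-pci, so nothing had to be rebound.
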
00:04:24.480 05:06:41 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:24.480 05:06:41 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:24.480 05:06:41 -- setup/hugepages.sh@89 -- # local node
00:04:24.480 05:06:41 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.480 05:06:41 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.480 05:06:41 -- setup/hugepages.sh@92 -- # local surp
00:04:24.480 05:06:41 -- setup/hugepages.sh@93 -- # local resv
00:04:24.480 05:06:41 -- setup/hugepages.sh@94 -- # local anon
00:04:24.480 05:06:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.480 05:06:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.480 05:06:41 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.480 05:06:41 -- setup/common.sh@18 -- # local node=
00:04:24.480 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.480 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.480 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.480 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.480 05:06:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.480 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.743 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.743 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.743 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.744 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42916080 kB' 'MemAvailable: 46631124 kB' 'Buffers: 4100 kB' 'Cached: 11235068 kB' 'SwapCached: 0 kB' 'Active: 8014692 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624732 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477612 kB' 'Mapped: 195564 kB' 'Shmem: 7150468 kB' 'KReclaimable: 246420 kB' 'Slab: 1030916 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784496 kB' 'KernelStack: 21888 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8793848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217820 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
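
Before counting explicit huge pages, verify_nr_hugepages probes transparent hugepages: the @96 test compares the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" here, brackets marking the active policy) against the pattern *[never]*. Since the policy is not [never], it samples AnonHugePages, which is 0 kB, so THP cannot distort the totals. A sketch of that probe (illustrative, reusing the get_meminfo stand-in from earlier):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)                   # 0 kB on this host
  fi
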
[setup/common.sh@31-32: the read / compare / continue cycle walks every field of the snapshot above, MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.745 05:06:41 -- setup/common.sh@33 -- # echo 0
00:04:24.745 05:06:41 -- setup/common.sh@33 -- # return 0
00:04:24.745 05:06:41 -- setup/hugepages.sh@97 -- # anon=0
00:04:24.745 05:06:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:24.745 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.745 05:06:41 -- setup/common.sh@18 -- # local node=
00:04:24.745 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.745 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.745 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.745 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.745 05:06:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.745 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.745 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.745 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42916968 kB' 'MemAvailable: 46632012 kB' 'Buffers: 4100 kB' 'Cached: 11235068 kB' 'SwapCached: 0 kB' 'Active: 8014052 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624092 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477016 kB' 'Mapped: 195592 kB' 'Shmem: 7150468 kB' 'KReclaimable: 246420 kB' 'Slab: 1030904 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784484 kB' 'KernelStack: 21856 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8793860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217788 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.745 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.745 05:06:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 
05:06:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.746 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.746 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # continue 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.747 05:06:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.747 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.747 05:06:41 -- setup/common.sh@33 -- # echo 0 00:04:24.747 05:06:41 -- setup/common.sh@33 -- # return 0 00:04:24.747 05:06:41 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.747 05:06:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.747 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.747 05:06:41 -- setup/common.sh@18 -- # local node= 00:04:24.747 05:06:41 -- setup/common.sh@19 -- # local var val 00:04:24.747 05:06:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.747 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.747 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.747 05:06:41 -- 
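For readability: the elided records above are all one mechanism. setup/common.sh's get_meminfo loads meminfo into an array, strips any per-node prefix, then walks each "Key: value" line with IFS=': '; read -r var val _ and echoes val once var matches the requested key. A minimal standalone sketch of that scan (hypothetical name get_meminfo_sketch, not the project's helper; it reads the file directly instead of going through mapfile):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <Key> [<numa-node>]
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # Per-node statistics live in sysfs rather than /proc (see the node0/node1 queries later).
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}             # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        # The same comparison the trace logs for every key, until the wanted one matches.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0                                     # absent key reports 0 (assumed fallback; not exercised above)
}

Against the snapshot above, get_meminfo_sketch AnonHugePages prints 0 and get_meminfo_sketch HugePages_Total prints 1024.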
00:04:24.747 05:06:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:24.747 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:24.747 05:06:41 -- setup/common.sh@18 -- # local node=
00:04:24.747 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.747 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.747 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.747 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.747 05:06:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.747 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.747 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.747 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42917332 kB' 'MemAvailable: 46632376 kB' 'Buffers: 4100 kB' 'Cached: 11235080 kB' 'SwapCached: 0 kB' 'Active: 8013920 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476852 kB' 'Mapped: 195516 kB' 'Shmem: 7150480 kB' 'KReclaimable: 246420 kB' 'Slab: 1030864 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784444 kB' 'KernelStack: 21856 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8793876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217788 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:24.747 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.747 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.747 05:06:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.747 05:06:41 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue records for every remaining key from MemFree through HugePages_Free; only HugePages_Rsvd matches ...]
00:04:24.748 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.748 05:06:41 -- setup/common.sh@33 -- # echo 0
00:04:24.748 05:06:41 -- setup/common.sh@33 -- # return 0
00:04:24.748 05:06:41 -- setup/hugepages.sh@100 -- # resv=0
00:04:24.748 05:06:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:24.748 nr_hugepages=1024
00:04:24.748 05:06:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:24.748 resv_hugepages=0
00:04:24.748 05:06:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:24.748 surplus_hugepages=0
00:04:24.748 05:06:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:24.748 anon_hugepages=0
00:04:24.748 05:06:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.748 05:06:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
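The @107/@109 checks just logged are the accounting identity this test hinges on: the kernel's HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved counts measured above. A hedged sketch of that assertion, reusing the hypothetical get_meminfo_sketch from earlier:

# Values echoed by the trace: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0.
nr_hugepages=1024 surp=0 resv=0
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "hugepage pool mismatch: kernel=$total expected=$((nr_hugepages + surp + resv))" >&2
fi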
00:04:24.748 05:06:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:24.748 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:24.748 05:06:41 -- setup/common.sh@18 -- # local node=
00:04:24.748 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.748 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.748 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.748 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.748 05:06:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.748 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.748 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.749 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.749 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.749 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42918104 kB' 'MemAvailable: 46633148 kB' 'Buffers: 4100 kB' 'Cached: 11235092 kB' 'SwapCached: 0 kB' 'Active: 8013404 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623444 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476288 kB' 'Mapped: 195516 kB' 'Shmem: 7150492 kB' 'KReclaimable: 246420 kB' 'Slab: 1030864 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784444 kB' 'KernelStack: 21840 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8793888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217788 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:24.749 05:06:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.749 05:06:41 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue records for every remaining key from MemFree through Unaccepted; only HugePages_Total matches ...]
00:04:24.750 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.750 05:06:41 -- setup/common.sh@33 -- # echo 1024
00:04:24.750 05:06:41 -- setup/common.sh@33 -- # return 0
00:04:24.750 05:06:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.750 05:06:41 -- setup/hugepages.sh@112 -- # get_nodes
00:04:24.750 05:06:41 -- setup/hugepages.sh@27 -- # local node
00:04:24.750 05:06:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.750 05:06:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.750 05:06:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.750 05:06:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.750 05:06:41 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:24.750 05:06:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
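get_nodes just found two NUMA nodes (no_nodes=2) and pencilled in 512 hugepages for each; the @115 loop that follows re-queries each node through its own sysfs meminfo file. A sketch of that enumeration: extglob enables the +([0-9]) glob the trace uses, and the per-node values here come from the hypothetical get_meminfo_sketch rather than whatever get_nodes actually reads:

shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                                   # ".../node/node1" -> "1"
    nodes_sys[n]=$(get_meminfo_sketch HugePages_Total "$n")
done
echo "nodes ${!nodes_sys[*]} -> per-node pools: ${nodes_sys[*]}"   # expected here: 0 1 -> 512 512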
00:04:24.750 05:06:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.750 05:06:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.750 05:06:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:24.750 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.750 05:06:41 -- setup/common.sh@18 -- # local node=0
00:04:24.750 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.750 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.750 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.750 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.750 05:06:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.750 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.750 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.750 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.750 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.750 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26989628 kB' 'MemUsed: 5595740 kB' 'SwapCached: 0 kB' 'Active: 2836620 kB' 'Inactive: 176568 kB' 'Active(anon): 2652336 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712176 kB' 'Mapped: 115600 kB' 'AnonPages: 304264 kB' 'Shmem: 2351324 kB' 'KernelStack: 12296 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 497780 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 390868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:24.750 05:06:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.750 05:06:41 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue records for every remaining node0 key from MemFree through HugePages_Free; only HugePages_Surp matches ...]
00:04:24.751 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.751 05:06:41 -- setup/common.sh@33 -- # echo 0
00:04:24.751 05:06:41 -- setup/common.sh@33 -- # return 0
00:04:24.751 05:06:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
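Node 0 just reported HugePages_Surp: 0, which the @117 arithmetic folds into its running per-node total before the loop moves on to node 1. A sketch of that bookkeeping, assuming nodes_test starts from the 512/512 split get_nodes established (the trace does not show where nodes_test is initialized):

declare -a nodes_test=(512 512)   # assumed starting point: the per-node share from get_nodes
resv=0                            # global HugePages_Rsvd measured earlier in the trace
for n in "${!nodes_test[@]}"; do
    (( nodes_test[n] += resv ))
    (( nodes_test[n] += $(get_meminfo_sketch HugePages_Surp "$n") ))
done
echo "expected per-node pools: ${nodes_test[*]}"   # 512 512 when reserved and surplus are 0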
00:04:24.752 05:06:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:24.752 05:06:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.752 05:06:41 -- setup/common.sh@18 -- # local node=1
00:04:24.752 05:06:41 -- setup/common.sh@19 -- # local var val
00:04:24.752 05:06:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.752 05:06:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.752 05:06:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:24.752 05:06:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:24.752 05:06:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.752 05:06:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.752 05:06:41 -- setup/common.sh@31 -- # IFS=': '
00:04:24.752 05:06:41 -- setup/common.sh@31 -- # read -r var val _
00:04:24.752 05:06:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 15928304 kB' 'MemUsed: 11770088 kB' 'SwapCached: 0 kB' 'Active: 5176628 kB' 'Inactive: 3522172 kB' 'Active(anon): 4970952 kB' 'Inactive(anon): 0 kB' 'Active(file): 205676 kB' 'Inactive(file): 3522172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8527056 kB' 'Mapped: 79916 kB' 'AnonPages: 171872 kB' 'Shmem: 4799208 kB' 'KernelStack: 9560 kB' 'PageTables: 3232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139508 kB' 'Slab: 533084 kB' 'SReclaimable: 139508 kB' 'SUnreclaim: 393576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31/@32 repeat "read -r var val _" / "continue" for every node-1 meminfo key (MemTotal ... HugePages_Free) until HugePages_Surp matches]
00:04:24.753 05:06:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.753 05:06:41 -- setup/common.sh@33 -- # echo 0
00:04:24.753 05:06:41 -- setup/common.sh@33 -- # return 0
00:04:24.753 05:06:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.753 05:06:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.753 05:06:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.753 05:06:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:24.753 05:06:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.753 05:06:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.753 05:06:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.753 05:06:41 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:24.753 05:06:41 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:24.753 real 0m3.582s
00:04:24.753 user 0m1.346s
00:04:24.753 sys 0m2.289s
00:04:24.753 05:06:41 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:24.753 05:06:41 -- common/autotest_common.sh@10 -- # set +x
00:04:24.753 ************************************
00:04:24.753 END TEST per_node_1G_alloc
00:04:24.753 ************************************
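Before the next test starts, it is worth spelling out the bookkeeping the trace just verified: get_test_nr_hugepages_per_node splits the requested page count evenly across NUMA nodes, and the verify pass echoes "nodeN=got expecting want" per node. A hedged sketch of that split and check (variable names follow the trace; the loop bodies are a reconstruction, not the script's exact code):

```bash
#!/usr/bin/env bash
nr_hugepages=1024 _no_nodes=2
declare -a nodes_test

# Even split: each of the two nodes gets 1024 / 2 = 512 pages.
share=$((nr_hugepages / _no_nodes))
for ((node = 0; node < _no_nodes; node++)); do
    nodes_test[node]=$share
done

# Verification mirrors the "node0=512 expecting 512" lines above.
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting $share"
done
```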
00:04:24.753 05:06:41 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:24.753 05:06:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:24.753 05:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:24.753 05:06:41 -- common/autotest_common.sh@10 -- # set +x
00:04:24.753 ************************************
00:04:24.753 START TEST even_2G_alloc
00:04:24.753 ************************************
00:04:24.753 05:06:41 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:24.753 05:06:41 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:24.753 05:06:41 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:24.753 05:06:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:24.753 05:06:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:24.753 05:06:41 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:24.753 05:06:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.753 05:06:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.753 05:06:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:24.753 05:06:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.753 05:06:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.753 05:06:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:24.753 05:06:41 -- setup/hugepages.sh@83 -- # : 512
00:04:24.753 05:06:41 -- setup/hugepages.sh@84 -- # : 1
00:04:24.753 05:06:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:24.753 05:06:41 -- setup/hugepages.sh@83 -- # : 0
00:04:24.753 05:06:41 -- setup/hugepages.sh@84 -- # : 0
00:04:24.753 05:06:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.753 05:06:41 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:24.753 05:06:41 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:24.753 05:06:41 -- setup/hugepages.sh@153 -- # setup output
00:04:24.753 05:06:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.753 05:06:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:28.950 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.950 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:28.950 05:06:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:28.950 05:06:44 -- setup/hugepages.sh@89 -- # local node
00:04:28.950 05:06:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:28.950 05:06:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:28.950 05:06:44 -- setup/hugepages.sh@92 -- # local surp
00:04:28.950 05:06:44 -- setup/hugepages.sh@93 -- # local resv
00:04:28.950 05:06:44 -- setup/hugepages.sh@94 -- # local anon
00:04:28.950 05:06:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
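The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test at hugepages.sh@96 is a transparent-hugepage gate: the kernel reports the active THP mode in brackets, and anonymous huge pages are only counted toward the total when that mode is not [never]. A sketch of the same gate (the sysfs path is standard Linux; the kB-to-pages conversion below is an assumption, not taken from the trace):

```bash
#!/usr/bin/env bash
thp=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
# The file reads e.g. "always [madvise] never"; brackets mark the active mode.
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    anon=$(( ${anon_kb:-0} / 2048 ))   # assumed: kB -> 2048 kB pages
fi
echo "anon=$anon"
```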
00:04:28.950 05:06:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:28.950 05:06:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:28.950 05:06:44 -- setup/common.sh@18 -- # local node=
00:04:28.950 05:06:44 -- setup/common.sh@19 -- # local var val
00:04:28.950 05:06:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.950 05:06:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.951 05:06:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.951 05:06:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.951 05:06:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.951 05:06:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.951 05:06:44 -- setup/common.sh@31 -- # IFS=': '
00:04:28.951 05:06:44 -- setup/common.sh@31 -- # read -r var val _
00:04:28.951 05:06:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42910192 kB' 'MemAvailable: 46625236 kB' 'Buffers: 4100 kB' 'Cached: 11235196 kB' 'SwapCached: 0 kB' 'Active: 8013556 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623596 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475900 kB' 'Mapped: 195604 kB' 'Shmem: 7150596 kB' 'KReclaimable: 246420 kB' 'Slab: 1031152 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784732 kB' 'KernelStack: 21888 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8794504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217804 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
[xtrace condensed: setup/common.sh@31/@32 repeat "read -r var val _" / "continue" for every meminfo key (MemTotal ... HardwareCorrupted) until AnonHugePages matches]
00:04:28.951 05:06:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:28.952 05:06:44 -- setup/common.sh@33 -- # echo 0
00:04:28.952 05:06:44 -- setup/common.sh@33 -- # return 0
00:04:28.952 05:06:44 -- setup/hugepages.sh@97 -- # anon=0
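What follows in the trace is verify_nr_hugepages reading the surplus and reserved counters the same way and then requiring that the configured total is fully accounted for. Condensed, the accounting amounts to the sketch below (assembled from the trace; the meminfo helper here is a hypothetical stand-in for the script's get_meminfo, and the exact expansion of the literals is an assumption):

```bash
#!/usr/bin/env bash
# Hypothetical helper standing in for the script's get_meminfo:
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                     # the value configured above
surp=$(meminfo HugePages_Surp)        # 0 in this run
resv=$(meminfo HugePages_Rsvd)        # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=${resv:-0}"
echo "surplus_hugepages=${surp:-0}"

# Mirrors the trace's (( 1024 == nr_hugepages + surp + resv )) check:
# the kernel's total pool must be exactly what the test requested.
total=$(meminfo HugePages_Total)
(( ${total:-0} == nr_hugepages + ${surp:-0} + ${resv:-0} )) || exit 1
```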
00:04:28.952 05:06:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:28.952 05:06:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.952 05:06:44 -- setup/common.sh@18 -- # local node=
00:04:28.952 05:06:44 -- setup/common.sh@19 -- # local var val
00:04:28.952 05:06:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.952 05:06:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.952 05:06:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.952 05:06:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.952 05:06:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.952 05:06:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.952 05:06:44 -- setup/common.sh@31 -- # IFS=': '
00:04:28.952 05:06:44 -- setup/common.sh@31 -- # read -r var val _
00:04:28.952 05:06:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42910704 kB' 'MemAvailable: 46625748 kB' 'Buffers: 4100 kB' 'Cached: 11235200 kB' 'SwapCached: 0 kB' 'Active: 8013112 kB' 'Inactive: 3698740 kB' 'Active(anon): 7623152 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475956 kB' 'Mapped: 195520 kB' 'Shmem: 7150600 kB' 'KReclaimable: 246420 kB' 'Slab: 1031112 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784692 kB' 'KernelStack: 21856 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8794516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217820 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
[xtrace condensed: the same per-key scan repeats (MemTotal ... HugePages_Free) until HugePages_Surp matches]
00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.953 05:06:44 -- setup/common.sh@33 -- # echo 0
00:04:28.953 05:06:44 -- setup/common.sh@33 -- # return 0
00:04:28.953 05:06:44 -- setup/hugepages.sh@99 -- # surp=0
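Aside: the node file probed at common.sh@23 comes from the sysfs NUMA topology, and the per-node hugepage pools the earlier per_node test relied on can be enumerated there directly. This is an equivalent spot check under the standard sysfs layout, not part of the harness:

```bash
# List each node's 2 MiB hugepage pool (512 per node in this run):
for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    echo "$f: $(<"$f")"
done
```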
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.953 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.953 05:06:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.954 
05:06:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.954 05:06:44 -- setup/common.sh@32 -- # continue
00:04:28.954 05:06:44 -- setup/common.sh@31 -- # IFS=': '
00:04:28.954 05:06:44 -- setup/common.sh@31 -- # read -r var val _
00:04:28.954 [... xtrace elided: the same compare/continue/IFS/read cycle repeats for every remaining /proc/meminfo field (Mlocked through HugePages_Free) until the requested HugePages_Rsvd field is reached ...]
00:04:28.955 05:06:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.955 05:06:44 -- setup/common.sh@33 -- # echo 0
00:04:28.955 05:06:44 -- setup/common.sh@33 -- # return 0
00:04:28.955 05:06:44 -- setup/hugepages.sh@100 -- # resv=0
00:04:28.955 05:06:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:28.955 nr_hugepages=1024
00:04:28.955 05:06:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:28.955 resv_hugepages=0
00:04:28.955 05:06:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:28.955 surplus_hugepages=0
00:04:28.955 05:06:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:28.955 anon_hugepages=0
00:04:28.955 05:06:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:28.955 05:06:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
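The @107/@109 checks assert that the pool the test configured is exactly what the kernel reports: 1024 requested pages, zero surplus, zero reserved. A hedged sketch of the same invariant using the stock procfs/sysfs counters (paths are the standard kernel layout, not taken from this log; HugePages_Total covers persistent plus surplus pages, and reserved pages are counted inside the pool):

    hp=/sys/kernel/mm/hugepages/hugepages-2048kB
    nr=$(cat "$hp"/nr_hugepages)
    surp=$(cat "$hp"/surplus_hugepages)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    # a clean 2G setup should report 1024 total, 0 surplus, 0 reserved
    (( total == nr + surp )) && (( resv == 0 )) || echo "hugepage accounting mismatch"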
00:04:28.955 05:06:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:28.955 05:06:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:28.955 05:06:44 -- setup/common.sh@18 -- # local node=
00:04:28.955 05:06:44 -- setup/common.sh@19 -- # local var val
00:04:28.955 05:06:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.955 05:06:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.955 05:06:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.955 05:06:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.955 05:06:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.955 05:06:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.955 05:06:44 -- setup/common.sh@31 -- # IFS=': '
00:04:28.955 05:06:44 -- setup/common.sh@31 -- # read -r var val _
00:04:28.955 05:06:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42912256 kB' 'MemAvailable: 46627300 kB' 'Buffers: 4100 kB' 'Cached: 11235236 kB' 'SwapCached: 0 kB' 'Active: 8012780 kB' 'Inactive: 3698740 kB' 'Active(anon): 7622820 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475548 kB' 'Mapped: 195520 kB' 'Shmem: 7150636 kB' 'KReclaimable: 246420 kB' 'Slab: 1031112 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784692 kB' 'KernelStack: 21840 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8794544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:28.956 [... xtrace elided: compare/continue repeats for each /proc/meminfo field until HugePages_Total matches ...]
00:04:28.956 05:06:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:28.956 05:06:45 -- setup/common.sh@33 -- # echo 1024
00:04:28.956 05:06:45 -- setup/common.sh@33 -- # return 0
00:04:28.956 05:06:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:28.956 05:06:45 -- setup/hugepages.sh@112 -- # get_nodes
00:04:28.956 05:06:45 -- setup/hugepages.sh@27 -- # local node
00:04:28.956 05:06:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:28.956 05:06:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:28.956 05:06:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:28.956 05:06:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:28.956 05:06:45 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:28.956 05:06:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:28.956 05:06:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
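From the locals and the mapfile/read records one can reconstruct roughly what setup/common.sh's get_meminfo does: pick /proc/meminfo or a node-local meminfo file, strip the "Node N " prefix that the per-node files carry, then scan field by field and print the value. A sketch under those assumptions (the real helper differs in detail):

    shopt -s extglob
    get_meminfo() {    # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-} var val _ mem_f mem
        mem_f=/proc/meminfo
        # per-node files live under sysfs and prefix every line with "Node N "
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 1024 for HugePages_Total above
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With no node argument the -e test fails (node/meminfo does not exist) and the helper falls back to /proc/meminfo, exactly as the trace shows; get_meminfo HugePages_Surp 0 below switches to node0's file instead.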
00:04:28.956 05:06:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:28.956 05:06:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:28.956 05:06:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.956 05:06:45 -- setup/common.sh@18 -- # local node=0
00:04:28.956 05:06:45 -- setup/common.sh@19 -- # local var val
00:04:28.956 05:06:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.956 05:06:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.956 05:06:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:28.956 05:06:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:28.956 05:06:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.956 05:06:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.956 05:06:45 -- setup/common.sh@31 -- # IFS=': '
00:04:28.956 05:06:45 -- setup/common.sh@31 -- # read -r var val _
00:04:28.956 05:06:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26987636 kB' 'MemUsed: 5597732 kB' 'SwapCached: 0 kB' 'Active: 2838028 kB' 'Inactive: 176568 kB' 'Active(anon): 2653744 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712220 kB' 'Mapped: 115608 kB' 'AnonPages: 305740 kB' 'Shmem: 2351368 kB' 'KernelStack: 12360 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 497876 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 390964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:28.957 [... xtrace elided: compare/continue repeats for each node0 meminfo field until HugePages_Surp matches ...]
00:04:28.957 05:06:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.957 05:06:45 -- setup/common.sh@33 -- # echo 0
00:04:28.957 05:06:45 -- setup/common.sh@33 -- # return 0
00:04:28.957 05:06:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
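Node 0 is queried here through /sys/devices/system/node/node0/meminfo; the same per-node hugepage counts are also exposed as individual sysfs files, which is often simpler when only one field is needed. A small illustrative loop (standard sysfs paths, 2048 kB page size assumed):

    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") x 2MB pages"
    done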
00:04:28.957 05:06:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:28.957 05:06:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:28.957 05:06:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:28.957 05:06:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.957 05:06:45 -- setup/common.sh@18 -- # local node=1
00:04:28.957 05:06:45 -- setup/common.sh@19 -- # local var val
00:04:28.957 05:06:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.957 05:06:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.957 05:06:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:28.957 05:06:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:28.957 05:06:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.957 05:06:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.957 05:06:45 -- setup/common.sh@31 -- # IFS=': '
00:04:28.957 05:06:45 -- setup/common.sh@31 -- # read -r var val _
00:04:28.958 05:06:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 15923620 kB' 'MemUsed: 11774772 kB' 'SwapCached: 0 kB' 'Active: 5175804 kB' 'Inactive: 3522172 kB' 'Active(anon): 4970128 kB' 'Inactive(anon): 0 kB' 'Active(file): 205676 kB' 'Inactive(file): 3522172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8527132 kB' 'Mapped: 79912 kB' 'AnonPages: 170884 kB' 'Shmem: 4799284 kB' 'KernelStack: 9512 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139508 kB' 'Slab: 533236 kB' 'SReclaimable: 139508 kB' 'SUnreclaim: 393728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:28.958 [... xtrace elided: compare/continue repeats for each node1 meminfo field until HugePages_Surp matches ...]
00:04:28.958 05:06:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.958 05:06:45 -- setup/common.sh@33 -- # echo 0
00:04:28.958 05:06:45 -- setup/common.sh@33 -- # return 0
00:04:28.958 05:06:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:28.958 05:06:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:28.958 05:06:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:28.958 05:06:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
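verify_nr_hugepages folds the observed counts into nodes_test and then uses the sorted_t/sorted_s associative arrays as sets, so the comparison is order-insensitive: the test passes when the multiset of per-node counts matches what get_test_nr_hugepages_per_node computed (512 and 512 for even_2G_alloc). A simplified sketch of that check, assuming two nodes and the sysfs layout shown above:

    declare -A expected=([0]=512 [1]=512)   # even_2G_alloc wants an even split
    ok=1
    for node in "${!expected[@]}"; do
        actual=$(cat "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node=$actual expecting ${expected[$node]}"
        (( actual == expected[node] )) || ok=0
    done
    (( ok )) || exit 1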
00:04:28.958 05:06:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:28.958 node0=512 expecting 512
00:04:28.958 05:06:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:28.958 05:06:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:28.958 05:06:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:28.958 05:06:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:28.958 node1=512 expecting 512
00:04:28.959 05:06:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:28.959 
00:04:28.959 real 0m3.777s
00:04:28.959 user 0m1.404s
00:04:28.959 sys 0m2.448s
00:04:28.959 05:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:28.959 05:06:45 -- common/autotest_common.sh@10 -- # set +x
00:04:28.959 ************************************
00:04:28.959 END TEST even_2G_alloc
00:04:28.959 ************************************
00:04:28.959 05:06:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:28.959 05:06:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.959 05:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.959 05:06:45 -- common/autotest_common.sh@10 -- # set +x
00:04:28.959 ************************************
00:04:28.959 START TEST odd_alloc
00:04:28.959 ************************************
00:04:28.959 05:06:45 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:28.959 05:06:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:28.959 05:06:45 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:28.959 05:06:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:28.959 05:06:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:28.959 05:06:45 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:28.959 05:06:45 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:28.959 05:06:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:28.959 05:06:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:28.959 05:06:45 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:28.959 05:06:45 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:28.959 05:06:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:28.959 05:06:45 -- setup/hugepages.sh@83 -- # : 513
00:04:28.959 05:06:45 -- setup/hugepages.sh@84 -- # : 1
00:04:28.959 05:06:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:28.959 05:06:45 -- setup/hugepages.sh@83 -- # : 0
00:04:28.959 05:06:45 -- setup/hugepages.sh@84 -- # : 0
00:04:28.959 05:06:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:28.959 05:06:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:28.959 05:06:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:28.959 05:06:45 -- setup/hugepages.sh@160 -- # setup output
00:04:28.959 05:06:45 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.959 05:06:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
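The odd_alloc figures are all derived from one another: HUGEMEM=2049 MB is 2098176 kB, which at the 2048 kB default page size rounds up to an odd 1025 pages, and the per-node loop deals the remainder to node0 (513/512). The arithmetic as a sketch (the rounding and dealing order are inferred from the trace, not copied from hugepages.sh):

    HUGEMEM=2049                              # MB; deliberately not a multiple of 2
    size_kb=$(( HUGEMEM * 1024 ))             # 2098176 kB
    default_hugepages=2048                    # kB per 2M page
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))  # ceil -> 1025
    no_nodes=2
    node0=$(( nr_hugepages / no_nodes + nr_hugepages % no_nodes ))             # 513
    node1=$(( nr_hugepages / no_nodes ))                                       # 512
    echo "nr_hugepages=$nr_hugepages node0=$node0 node1=$node1"

The 'Hugetlb: 2099200 kB' value in the dumps that follow is consistent with this: 1025 pages x 2048 kB.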
00:04:32.251 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:32.251 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:32.251 05:06:48 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:32.251 05:06:48 -- setup/hugepages.sh@89 -- # local node
00:04:32.251 05:06:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.251 05:06:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.251 05:06:48 -- setup/hugepages.sh@92 -- # local surp
00:04:32.251 05:06:48 -- setup/hugepages.sh@93 -- # local resv
00:04:32.251 05:06:48 -- setup/hugepages.sh@94 -- # local anon
00:04:32.251 05:06:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
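The @96 test looks odd in xtrace form but is just a guard on transparent hugepages: the literal string 'always [madvise] never' is the content of the THP enabled knob, and the pattern asks whether '[never]' is the active selection. Only when THP is not disabled does the script sample AnonHugePages. Roughly (the path is the standard THP sysfs knob; get_meminfo is the sketch given earlier):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP may create anon huge pages behind the test's back
    else
        anon=0
    fi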
00:04:32.251 05:06:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.251 05:06:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.251 05:06:48 -- setup/common.sh@18 -- # local node=
00:04:32.251 05:06:48 -- setup/common.sh@19 -- # local var val
00:04:32.251 05:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.251 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.251 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.251 05:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.251 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.251 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.251 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.251 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.251 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42935708 kB' 'MemAvailable: 46651256 kB' 'Buffers: 4100 kB' 'Cached: 11235324 kB' 'SwapCached: 0 kB' 'Active: 8014912 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624952 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477456 kB' 'Mapped: 195540 kB' 'Shmem: 7150724 kB' 'KReclaimable: 246420 kB' 'Slab: 1030900 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784480 kB' 'KernelStack: 22080 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8799688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:32.252 [... xtrace elided: compare/continue repeats for each /proc/meminfo field until AnonHugePages matches ...]
00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.252 05:06:48 -- setup/common.sh@33 -- # echo 0
00:04:32.252 05:06:48 -- setup/common.sh@33 -- # return 0
00:04:32.252 05:06:48 -- setup/hugepages.sh@97 -- # anon=0
00:04:32.252 05:06:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.252 05:06:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.252 05:06:48 -- setup/common.sh@18 -- # local node=
00:04:32.252 05:06:48 -- setup/common.sh@19 -- # local var val
00:04:32.252 05:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.252 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.252 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.252 05:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.252 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.252 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.252 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42937368 kB' 'MemAvailable: 46652412 kB' 'Buffers: 4100 kB' 'Cached: 11235328 kB' 'SwapCached: 0 kB' 'Active: 8015092 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625132 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477664 kB' 'Mapped: 195532 kB' 'Shmem: 7150728 kB' 'KReclaimable: 246420 kB' 'Slab: 1030940 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784520 kB' 'KernelStack: 21952 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8799700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.252 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.252 05:06:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 
05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.253 05:06:48 -- setup/common.sh@33 -- # echo 0 00:04:32.253 05:06:48 -- setup/common.sh@33 
-- # return 0 00:04:32.253 05:06:48 -- setup/hugepages.sh@99 -- # surp=0 00:04:32.253 05:06:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.253 05:06:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.253 05:06:48 -- setup/common.sh@18 -- # local node= 00:04:32.253 05:06:48 -- setup/common.sh@19 -- # local var val 00:04:32.253 05:06:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.253 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.253 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.253 05:06:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.253 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.253 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42935712 kB' 'MemAvailable: 46650756 kB' 'Buffers: 4100 kB' 'Cached: 11235340 kB' 'SwapCached: 0 kB' 'Active: 8015372 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625412 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477924 kB' 'Mapped: 195532 kB' 'Shmem: 7150740 kB' 'KReclaimable: 246420 kB' 'Slab: 1031132 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784712 kB' 'KernelStack: 22080 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8799468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218044 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.253 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.253 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.253 05:06:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 
-- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.254 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.254 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.254 05:06:48 -- setup/common.sh@33 -- # echo 0 00:04:32.254 05:06:48 -- setup/common.sh@33 -- # return 0 00:04:32.254 05:06:48 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.254 05:06:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:32.254 nr_hugepages=1025 00:04:32.254 05:06:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.254 resv_hugepages=0 00:04:32.254 05:06:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.254 surplus_hugepages=0 00:04:32.254 05:06:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.254 anon_hugepages=0 00:04:32.254 05:06:48 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.254 05:06:48 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:32.254 05:06:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.254 05:06:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.254 05:06:48 -- setup/common.sh@18 -- # local node= 00:04:32.254 05:06:48 -- setup/common.sh@19 -- # local var val 00:04:32.254 05:06:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.254 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.254 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.255 05:06:48 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:04:32.255 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.255 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42937472 kB' 'MemAvailable: 46652516 kB' 'Buffers: 4100 kB' 'Cached: 11235352 kB' 'SwapCached: 0 kB' 'Active: 8014920 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477428 kB' 'Mapped: 195532 kB' 'Shmem: 7150752 kB' 'KReclaimable: 246420 kB' 'Slab: 1031068 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 784648 kB' 'KernelStack: 22160 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8799732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218044 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- 
setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 
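[editor's note] By this point the trace has pulled three figures out of /proc/meminfo — anon=0 (AnonHugePages), surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd) — and is scanning for HugePages_Total, which comes back as 1025. The hugepages.sh@107/@109 checks echoed earlier then assert that the kernel's pool accounts exactly for what the test requested. A straight-line condensation using the get_meminfo sketch above; variable names are taken from the log, the scaffolding around them is assumed:

    anon=$(get_meminfo AnonHugePages)    # 0 kB: no transparent huge pages in play
    surp=$(get_meminfo HugePages_Surp)   # 0: nothing allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but not yet faulted
    nr_hugepages=1025                    # the count this test run configured
    total=$(get_meminfo HugePages_Total)

    # The kernel's pool must account exactly for the requested pages:
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage pool" >&2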
00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.255 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.255 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- 
setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.256 05:06:48 -- setup/common.sh@33 -- # echo 1025 00:04:32.256 05:06:48 -- setup/common.sh@33 -- # return 0 00:04:32.256 05:06:48 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.256 05:06:48 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.256 05:06:48 -- setup/hugepages.sh@27 -- # local node 00:04:32.256 05:06:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.256 05:06:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.256 05:06:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.256 05:06:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:32.256 05:06:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.256 05:06:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.256 05:06:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.256 05:06:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.256 05:06:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.256 05:06:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.256 05:06:48 -- setup/common.sh@18 -- # local node=0 00:04:32.256 05:06:48 -- setup/common.sh@19 -- # local var val 00:04:32.256 05:06:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.256 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.256 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.256 05:06:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.256 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.256 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27011076 kB' 'MemUsed: 5574292 kB' 'SwapCached: 0 kB' 'Active: 2837484 kB' 'Inactive: 176568 kB' 'Active(anon): 2653200 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712240 kB' 'Mapped: 115616 kB' 'AnonPages: 304888 kB' 'Shmem: 2351388 kB' 'KernelStack: 12360 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 497760 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 390848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 
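[editor's note] From get_nodes onward the log repeats the same lookup per NUMA node: the nodes_sys assignments show the test expects the 1025 pages split as 512 on node0 and 513 on node1, and it reads each node's meminfo from sysfs to confirm. A hedged sketch of that walk — the array name and expected counts mirror the trace, the comparison loop itself is assumed:

    nodes_sys=()                          # expected pages per node, from the trace
    nodes_sys[0]=512; nodes_sys[1]=513    # 512 + 513 = the 1025-page pool

    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                  # "node0" -> "0"
        have=$(get_meminfo HugePages_Total "$n")
        (( have == nodes_sys[n] )) || echo "node$n: $have != ${nodes_sys[n]}" >&2
    done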
00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.256 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.256 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@33 -- # echo 0 00:04:32.257 05:06:48 -- setup/common.sh@33 -- # return 0 
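The echo 0 / return 0 pair closes one get_meminfo call: node 0 reports no surplus huge pages. A sketch of the lookup this trace performs, reconstructed from the xtrace (the real helper in setup/common.sh differs in detail; it slurps the file with mapfile and strips the sysfs prefix with an extglob pattern):

    get_meminfo() {    # usage: get_meminfo <field> [numa-node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; default to the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # The sysfs variant prefixes every line with "Node <n> "; drop it so keys match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp 0    # prints 0 on this box, as traced above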
00:04:32.257 05:06:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.257 05:06:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.257 05:06:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.257 05:06:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:32.257 05:06:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.257 05:06:48 -- setup/common.sh@18 -- # local node=1 00:04:32.257 05:06:48 -- setup/common.sh@19 -- # local var val 00:04:32.257 05:06:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.257 05:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.257 05:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:32.257 05:06:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:32.257 05:06:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.257 05:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 15925464 kB' 'MemUsed: 11772928 kB' 'SwapCached: 0 kB' 'Active: 5177580 kB' 'Inactive: 3522172 kB' 'Active(anon): 4971904 kB' 'Inactive(anon): 0 kB' 'Active(file): 205676 kB' 'Inactive(file): 3522172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8527244 kB' 'Mapped: 79916 kB' 'AnonPages: 172616 kB' 'Shmem: 4799396 kB' 'KernelStack: 9688 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139508 kB' 'Slab: 533308 kB' 'SReclaimable: 139508 kB' 'SUnreclaim: 393800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue 00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.257 
05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # continue
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:32.257 05:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:32.257 05:06:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.257 05:06:48 -- setup/common.sh@33 -- # echo 0
00:04:32.258 05:06:48 -- setup/common.sh@33 -- # return 0
00:04:32.258 05:06:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.258 05:06:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.258 05:06:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:32.258 node0=512 expecting 513
00:04:32.258 05:06:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.258 05:06:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.258 05:06:48 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:32.258 node1=513 expecting 512
00:04:32.258 05:06:48 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:32.258
00:04:32.258 real 0m3.419s
00:04:32.258 user 0m1.305s
00:04:32.258 sys 0m2.143s
00:04:32.258 05:06:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:32.258 05:06:48 -- common/autotest_common.sh@10 -- # set +x
00:04:32.258 ************************************
00:04:32.258 END TEST odd_alloc
00:04:32.258 ************************************
00:04:32.258 05:06:48 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:32.258 05:06:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.258 05:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.258 05:06:48 -- common/autotest_common.sh@10 -- # set +x
00:04:32.258 ************************************
00:04:32.258 START TEST custom_alloc
00:04:32.258 ************************************
00:04:32.258 05:06:48 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:32.258 05:06:48 -- setup/hugepages.sh@167 -- # local IFS=,
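That closes odd_alloc as passing: the test tallies per-node counts into sorted_t and sorted_s keyed by value, so the final [[ 512 513 == \5\1\2\ \5\1\3 ]] compares the sorted sets, and the run passes even though the nodes hold the swapped amounts (node0=512 where 513 was expected, node1=513 where 512 was expected). The custom_alloc trace that follows sizes two unequal pools; the arithmetic below reconstructs what the xtrace shows, taking the sizes as kilobytes against the Hugepagesize of 2048 kB reported in the meminfo dumps (an assumption, since the helper does not echo its units):

    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this machine
    nodes_hp[0]=$(( 1048576 / hp_kb ))    # 512 pages  (1 GiB pool)
    nodes_hp[1]=$(( 2097152 / hp_kb ))    # 1024 pages (2 GiB pool)
    HUGENODE="nodes_hp[0]=${nodes_hp[0]},nodes_hp[1]=${nodes_hp[1]}"
    echo "$HUGENODE"    # nodes_hp[0]=512,nodes_hp[1]=1024, as in the trace below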
00:04:32.258 05:06:48 -- setup/hugepages.sh@169 -- # local node
00:04:32.258 05:06:48 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:32.258 05:06:48 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:32.258 05:06:48 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:32.258 05:06:48 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:32.258 05:06:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:32.258 05:06:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.258 05:06:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:32.258 05:06:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.258 05:06:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:32.258 05:06:48 -- setup/hugepages.sh@83 -- # : 256
00:04:32.258 05:06:48 -- setup/hugepages.sh@84 -- # : 1
00:04:32.258 05:06:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:32.258 05:06:48 -- setup/hugepages.sh@83 -- # : 0
00:04:32.258 05:06:48 -- setup/hugepages.sh@84 -- # : 0
00:04:32.258 05:06:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:32.258 05:06:48 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:32.258 05:06:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:32.258 05:06:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:32.258 05:06:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.258 05:06:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:32.258 05:06:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.258 05:06:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:32.258 05:06:48 -- setup/hugepages.sh@78 -- # return 0
00:04:32.258 05:06:48 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:32.258 05:06:48 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:32.258 05:06:48 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:32.258 05:06:48 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.258 05:06:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:32.258 05:06:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.258 05:06:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.258 05:06:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:32.258 05:06:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:32.258 05:06:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:32.258 05:06:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:32.258 05:06:48 -- setup/hugepages.sh@78 -- # return 0
00:04:32.258 05:06:48 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:32.258 05:06:48 -- setup/hugepages.sh@187 -- # setup output
00:04:32.258 05:06:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.258 05:06:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:35.543 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:35.543 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:35.543 05:06:51 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:35.543 05:06:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:35.543 05:06:51 -- setup/hugepages.sh@89 -- # local node
00:04:35.543 05:06:51 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.543 05:06:51 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.543 05:06:51 -- setup/hugepages.sh@92 -- # local surp
00:04:35.543 05:06:51 -- setup/hugepages.sh@93 -- # local resv
00:04:35.543 05:06:51 -- setup/hugepages.sh@94 -- # local anon
00:04:35.543 05:06:51 -- setup/hugepages.sh@96 -- # [[ always [madvise]
never != *\[\n\e\v\e\r\]* ]] 00:04:35.543 05:06:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.543 05:06:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.543 05:06:51 -- setup/common.sh@18 -- # local node= 00:04:35.543 05:06:51 -- setup/common.sh@19 -- # local var val 00:04:35.543 05:06:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.543 05:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.543 05:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.543 05:06:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.543 05:06:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.543 05:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41895960 kB' 'MemAvailable: 45611004 kB' 'Buffers: 4100 kB' 'Cached: 11235468 kB' 'SwapCached: 0 kB' 'Active: 8015752 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625792 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477772 kB' 'Mapped: 195612 kB' 'Shmem: 7150868 kB' 'KReclaimable: 246420 kB' 'Slab: 1031504 kB' 'SReclaimable: 246420 kB' 'SUnreclaim: 785084 kB' 'KernelStack: 21904 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8795816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.543 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.543 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 
05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.544 05:06:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.544 05:06:51 -- setup/common.sh@33 -- # echo 0 00:04:35.544 05:06:51 -- setup/common.sh@33 -- # return 0 00:04:35.544 05:06:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.544 05:06:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.544 05:06:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.544 05:06:51 -- setup/common.sh@18 -- # local node= 00:04:35.544 05:06:51 -- setup/common.sh@19 -- # local var val 00:04:35.544 05:06:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.544 05:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.544 05:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.544 05:06:51 -- setup/common.sh@25 -- # [[ -n '' ]] 
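verify_nr_hugepages starts by gating on transparent hugepages: unescaped, the pattern in [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is just *[never]*, so the test asks whether [never] is not the selected THP mode in sysfs. Since madvise is active here, the script samples AnonHugePages (0 kB above, hence anon=0) before it reads HugePages_Surp and HugePages_Rsvd, keeping THP-backed anonymous memory from distorting the static-pool accounting. A sketch of that gate and of the totals it feeds, reusing the get_meminfo sketch from earlier (the exact bookkeeping in setup/hugepages.sh may differ):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # kB of THP in use; 0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    # The dumps report HugePages_Total and HugePages_Free of 1536 = 512 + 1024,
    # exactly the pool that HUGENODE requested.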
00:04:35.544 05:06:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.544 05:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.544 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41897908 kB' 'MemAvailable: 45612932 kB' 'Buffers: 4100 kB' 'Cached: 11235472 kB' 'SwapCached: 0 kB' 'Active: 8014788 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624828 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477280 kB' 'Mapped: 195532 kB' 'Shmem: 7150872 kB' 'KReclaimable: 246380 kB' 'Slab: 1031372 kB' 'SReclaimable: 246380 kB' 'SUnreclaim: 784992 kB' 'KernelStack: 21872 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8795828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.545 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.545 05:06:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.546 05:06:51 -- setup/common.sh@32 -- # continue 00:04:35.546 05:06:51 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.546 05:06:51 -- setup/common.sh@31 -- # read -r var val _
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue
00:04:35.546 05:06:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.546 05:06:51 -- setup/common.sh@33 -- # echo 0
00:04:35.546 05:06:51 -- setup/common.sh@33 -- # return 0
00:04:35.546 05:06:51 -- setup/hugepages.sh@99 -- # surp=0
00:04:35.546 05:06:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.546 05:06:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.546 05:06:51 -- setup/common.sh@18 -- # local node=
00:04:35.546 05:06:51 -- setup/common.sh@19 -- # local var val
00:04:35.546 05:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.546 05:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.546 05:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.546 05:06:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.546 05:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.546 05:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.546 05:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41897656 kB' 'MemAvailable: 45612680 kB' 'Buffers: 4100 kB' 'Cached: 11235472 kB' 'SwapCached: 0 kB' 'Active: 8014828 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624868 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477316 kB' 'Mapped: 195532 kB' 'Shmem: 7150872 kB' 'KReclaimable: 246380 kB' 'Slab: 1031372 kB' 'SReclaimable: 246380 kB' 'SUnreclaim: 784992 kB' 'KernelStack: 21888 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8795844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
...(the setup/common.sh@31-@32 read/compare/continue trace repeats for every key from MemTotal through HugePages_Free; none matches HugePages_Rsvd)...
00:04:35.547 05:06:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.547 05:06:52 -- setup/common.sh@33 -- # echo 0
00:04:35.547 05:06:52 -- setup/common.sh@33 -- # return 0
00:04:35.547 05:06:52 -- setup/hugepages.sh@100 -- # resv=0
00:04:35.547 05:06:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:35.547 nr_hugepages=1536
00:04:35.547 05:06:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.547 resv_hugepages=0
00:04:35.547 05:06:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.547 surplus_hugepages=0
00:04:35.547 05:06:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.547 anon_hugepages=0
00:04:35.547 05:06:52 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:35.547 05:06:52 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
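For orientation: each @17-@33 run above is one pass of the get_meminfo helper in setup/common.sh, which picks a source file (the per-node meminfo when a node id is given), strips any "Node N" prefix, then reads key/value pairs until the requested key matches. A minimal bash sketch reconstructed from this trace alone -- an assumption about the helper's shape, not the verbatim SPDK source:

    # Hypothetical reconstruction of setup/common.sh get_meminfo, based only
    # on the xtrace above -- a sketch, not the verbatim SPDK helper.
    shopt -s extglob                       # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}           # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"           # print the matched value
                return 0
            fi
        done
        echo 0                             # key absent: report 0
    }

Called as get_meminfo HugePages_Rsvd against the snapshot above it prints 0, which hugepages.sh stores as resv=0.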
00:04:35.547 05:06:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.547 05:06:52 -- setup/common.sh@17 -- # local get=HugePages_Total
...(same setup/common.sh@18-@29 get_meminfo preamble as above, reading /proc/meminfo)...
00:04:35.548 05:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41897744 kB' 'MemAvailable: 45612768 kB' 'Buffers: 4100 kB' 'Cached: 11235476 kB' 'SwapCached: 0 kB' 'Active: 8014872 kB' 'Inactive: 3698740 kB' 'Active(anon): 7624912 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477356 kB' 'Mapped: 195532 kB' 'Shmem: 7150876 kB' 'KReclaimable: 246380 kB' 'Slab: 1031372 kB' 'SReclaimable: 246380 kB' 'SUnreclaim: 784992 kB' 'KernelStack: 21856 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8795492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
...(the @31-@32 read/compare/continue trace repeats for every key from MemTotal through Unaccepted until HugePages_Total matches)...
00:04:35.549 05:06:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.549 05:06:52 -- setup/common.sh@33 -- # echo 1536
00:04:35.549 05:06:52 -- setup/common.sh@33 -- # return 0
00:04:35.549 05:06:52 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:35.549 05:06:52 -- setup/hugepages.sh@112 -- # get_nodes
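The three lookups feed one accounting check: HugePages_Total (1536 here) must equal the requested page count plus surplus and reserved pages. A standalone equivalent, as a hypothetical sketch in which awk lookups stand in for the get_meminfo calls in the trace:

    # Standalone version of the accounting check at hugepages.sh@107-@110
    # (hypothetical sketch; awk lookups replace the get_meminfo calls).
    nr_hugepages=1536
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    fi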
00:04:35.549 05:06:52 -- setup/hugepages.sh@27 -- # local node
00:04:35.549 05:06:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.549 05:06:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:35.549 05:06:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.549 05:06:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:35.549 05:06:52 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:35.549 05:06:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
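get_nodes walks the sysfs node directories and records each node's current 2048 kB hugepage count (512 on node0 and 1024 on node1 here). A sketch of that enumeration; the per-node sysfs hugepage counters are standard Linux, while the loop body is reconstructed from the trace rather than copied from the script:

    # Sketch of the node enumeration at hugepages.sh@27-@33 (reconstructed).
    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # read this node's current 2 MiB hugepage count from sysfs
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this machine
    (( no_nodes > 0 )) || echo "no NUMA nodes detected" >&2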
00:04:35.549 05:06:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.549 05:06:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.549 05:06:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:35.549 05:06:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.549 05:06:52 -- setup/common.sh@18 -- # local node=0
00:04:35.549 05:06:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:35.549 05:06:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:35.549 05:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.549 05:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.549 05:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27025448 kB' 'MemUsed: 5559920 kB' 'SwapCached: 0 kB' 'Active: 2838140 kB' 'Inactive: 176568 kB' 'Active(anon): 2653856 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712316 kB' 'Mapped: 115620 kB' 'AnonPages: 305640 kB' 'Shmem: 2351464 kB' 'KernelStack: 12312 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106872 kB' 'Slab: 498004 kB' 'SReclaimable: 106872 kB' 'SUnreclaim: 391132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
...(the @31-@32 read/compare/continue trace repeats for every node0 meminfo key until HugePages_Surp matches)...
00:04:35.550 05:06:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.550 05:06:52 -- setup/common.sh@33 -- # echo 0
00:04:35.550 05:06:52 -- setup/common.sh@33 -- # return 0
00:04:35.550 05:06:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
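Each per-node pass reads a single key out of /sys/devices/system/node/nodeN/meminfo. The same counters are also exposed as individual sysfs files, which allows a quick manual cross-check; the paths are standard Linux sysfs, but the loop itself is illustrative and not part of the test:

    # Illustrative cross-check of the per-node hugepage counters via sysfs.
    for n in /sys/devices/system/node/node[0-9]*; do
        hp=$n/hugepages/hugepages-2048kB
        printf '%s: total=%s free=%s surplus=%s\n' "${n##*/}" \
            "$(< "$hp/nr_hugepages")" "$(< "$hp/free_hugepages")" \
            "$(< "$hp/surplus_hugepages")"
    done
    # On this host it would print: node0: total=512 free=512 surplus=0
    #                              node1: total=1024 free=1024 surplus=0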
00:04:35.550 05:06:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.550 05:06:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.550 05:06:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:35.550 05:06:52 -- setup/common.sh@18 -- # local node=1
00:04:35.550 05:06:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:35.550 05:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 14870356 kB' 'MemUsed: 12828036 kB' 'SwapCached: 0 kB' 'Active: 5176580 kB' 'Inactive: 3522172 kB' 'Active(anon): 4970904 kB' 'Inactive(anon): 0 kB' 'Active(file): 205676 kB' 'Inactive(file): 3522172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8527300 kB' 'Mapped: 79912 kB' 'AnonPages: 171520 kB' 'Shmem: 4799452 kB' 'KernelStack: 9560 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139508 kB' 'Slab: 533352 kB' 'SReclaimable: 139508 kB' 'SUnreclaim: 393844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
...(the @31-@32 read/compare/continue trace repeats for every node1 meminfo key until HugePages_Surp matches)...
00:04:35.810 05:06:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.810 05:06:52 -- setup/common.sh@33 -- # echo 0
00:04:35.810 05:06:52 -- setup/common.sh@33 -- # return 0
00:04:35.810 05:06:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.810 05:06:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.810 05:06:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.810 05:06:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.810 05:06:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:35.810 node0=512 expecting 512
00:04:35.810 05:06:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.810 05:06:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.810 05:06:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.810 05:06:52 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:35.810 node1=1024 expecting 1024
00:04:35.810 05:06:52 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:35.810 
00:04:35.810 real	0m3.517s
00:04:35.810 user	0m1.283s
00:04:35.810 sys	0m2.286s
00:04:35.810 05:06:52 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:35.810 05:06:52 -- common/autotest_common.sh@10 -- # set +x
00:04:35.810 ************************************
00:04:35.810 END TEST custom_alloc
00:04:35.810 ************************************
00:04:35.810 05:06:52 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:35.810 05:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:35.810 05:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:35.810 05:06:52 -- common/autotest_common.sh@10 -- # set +x
00:04:35.810 ************************************
00:04:35.810 START TEST no_shrink_alloc
00:04:35.810 ************************************
00:04:35.810 05:06:52 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:35.810 05:06:52 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:35.810 05:06:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:35.810 05:06:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:35.810 05:06:52 -- setup/hugepages.sh@51 -- # shift
00:04:35.810 05:06:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:35.810 05:06:52 -- setup/hugepages.sh@52 -- # local node_ids
00:04:35.810 05:06:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.810 05:06:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:35.810 05:06:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:35.810 05:06:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:35.810 05:06:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:35.810 05:06:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:35.810 05:06:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:35.810 05:06:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:35.810 05:06:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:35.810 05:06:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:35.810 05:06:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:35.810 05:06:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:35.810 05:06:52 -- setup/hugepages.sh@73 -- # return 0
00:04:35.810 05:06:52 -- setup/hugepages.sh@198 -- # setup output
00:04:35.810 05:06:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.810 05:06:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
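get_test_nr_hugepages 2097152 0 reduces a size to a page count: assuming both values are in kB, which matches the numbers in the trace, 2097152 / 2048 gives the nr_hugepages=1024 seen above. As a standalone sketch:

    # Sketch of the arithmetic behind get_test_nr_hugepages 2097152 0,
    # assuming both values are in kB (consistent with the trace).
    size=2097152                                                           # requested size, kB
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"                                      # 1024, as in the trace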
00:04:39.101 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:39.101 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:39.101 05:06:55 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:39.101 05:06:55 -- setup/hugepages.sh@89 -- # local node
00:04:39.101 05:06:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.101 05:06:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.101 05:06:55 -- setup/hugepages.sh@92 -- # local surp
00:04:39.101 05:06:55 -- setup/hugepages.sh@93 -- # local resv
00:04:39.101 05:06:55 -- setup/hugepages.sh@94 -- # local anon
00:04:39.101 05:06:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:39.101 05:06:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.101 05:06:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.101 05:06:55 -- setup/common.sh@18 -- # local node=
00:04:39.101 05:06:55 -- setup/common.sh@19 -- # local var val
00:04:39.101 05:06:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.101 05:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.101 05:06:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.101 05:06:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.101 05:06:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.101 05:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.101 05:06:55 -- setup/common.sh@31 -- # IFS=': '
00:04:39.101 05:06:55 -- setup/common.sh@31 -- # read -r var val _
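Note: the check at hugepages.sh@96 a few lines above gates anonymous-THP accounting; it tests the transparent_hugepage mode list and proceeds unless the active (bracketed) mode is 'never'. A hedged sketch of that gate, not the verbatim SPDK code, reusing the get_meminfo sketch from the earlier note:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # only meaningful when THP can be active
    fi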
00:04:39.101 05:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42914476 kB' 'MemAvailable: 46629496 kB' 'Buffers: 4100 kB' 'Cached: 11235592 kB' 'SwapCached: 0 kB' 'Active: 8015992 kB' 'Inactive: 3698740 kB' 'Active(anon): 7626032 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478120 kB' 'Mapped: 195552 kB' 'Shmem: 7150992 kB' 'KReclaimable: 246372 kB' 'Slab: 1031216 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784844 kB' 'KernelStack: 21888 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8798472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:39.101 05:06:55 -- setup/common.sh@31-32 -- # [field scan: MemTotal through HardwareCorrupted each compared against AnonHugePages and skipped with 'continue']
00:04:39.102 05:06:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.102 05:06:55 -- setup/common.sh@33 -- # echo 0
00:04:39.102 05:06:55 -- setup/common.sh@33 -- # return 0
00:04:39.102 05:06:55 -- setup/hugepages.sh@97 -- # anon=0
00:04:39.102 05:06:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:39.102 05:06:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.102 05:06:55 -- setup/common.sh@18 -- # local node=
00:04:39.102 05:06:55 -- setup/common.sh@19 -- # local var val
00:04:39.102 05:06:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.102 05:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.102 05:06:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.102 05:06:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.102 05:06:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.102 05:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.102 05:06:55 -- setup/common.sh@31 -- # IFS=': '
00:04:39.102 05:06:55 -- setup/common.sh@31 -- # read -r var val _
00:04:39.102 05:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42919944 kB' 'MemAvailable: 46634964 kB' 'Buffers: 4100 kB' 'Cached: 11235596 kB' 'SwapCached: 0 kB' 'Active: 8015568 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625608 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477796 kB' 'Mapped: 195540 kB' 'Shmem: 7150996 kB' 'KReclaimable: 246372 kB' 'Slab: 1031228 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784856 kB' 'KernelStack: 21840 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8796236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217868 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
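Note: the hugepage fields in the snapshots above are internally consistent and can be checked directly with shell arithmetic:

    total=1024; free=1024; pagesize_kb=2048; hugetlb_kb=2097152
    (( total * pagesize_kb == hugetlb_kb )) && echo 'Hugetlb == HugePages_Total * Hugepagesize'
    (( free == total )) && echo 'no hugepages currently faulted in'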
00:04:39.103 05:06:55 -- setup/common.sh@31-32 -- # [field scan: MemTotal through HugePages_Rsvd each compared against HugePages_Surp and skipped with 'continue']
00:04:39.104 05:06:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.104 05:06:55 -- setup/common.sh@33 -- # echo 0
00:04:39.104 05:06:55 -- setup/common.sh@33 -- # return 0
00:04:39.104 05:06:55 -- setup/hugepages.sh@99 -- # surp=0
00:04:39.104 05:06:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.104 05:06:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.104 05:06:55 -- setup/common.sh@18 -- # local node=
00:04:39.104 05:06:55 -- setup/common.sh@19 -- # local var val
00:04:39.104 05:06:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.104 05:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.104 05:06:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.104 05:06:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.104 05:06:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.104 05:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.104 05:06:55 -- setup/common.sh@31 -- # IFS=': '
00:04:39.104 05:06:55 -- setup/common.sh@31 -- # read -r var val _
00:04:39.104 05:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42920336 kB' 'MemAvailable: 46635356 kB' 'Buffers: 4100 kB' 'Cached: 11235608 kB' 'SwapCached: 0 kB' 'Active: 8015528 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625568 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477812 kB' 'Mapped: 195540 kB' 'Shmem: 7151008 kB' 'KReclaimable: 246372 kB' 'Slab: 1031260 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784888 kB' 'KernelStack: 21840 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8796252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217804 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
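Note: on the two fields fetched here (standard kernel hugetlb accounting, not SPDK-specific): HugePages_Surp counts surplus pages allocated beyond nr_hugepages via overcommit, and HugePages_Rsvd counts pages reserved by mappings but not yet faulted in. Using the get_meminfo shape sketched earlier, with illustrative variable names:

    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted pages
    echo "surp=$surp resv=$resv"         # both 0 in this run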
00:04:39.104 05:06:55 -- setup/common.sh@31-32 -- # [field scan: MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped with 'continue']
00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.106 05:06:55 -- setup/common.sh@33 -- # echo 0
00:04:39.106 05:06:55 -- setup/common.sh@33 -- # return 0
00:04:39.106 05:06:55 -- setup/hugepages.sh@100 -- # resv=0
00:04:39.106 05:06:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:39.106 nr_hugepages=1024
00:04:39.106 05:06:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.106 resv_hugepages=0
00:04:39.106 05:06:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.106 surplus_hugepages=0
00:04:39.106 05:06:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.106 anon_hugepages=0
00:04:39.106 05:06:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.106 05:06:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:39.106 05:06:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.106 05:06:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.106 05:06:55 -- setup/common.sh@18 -- # local node=
00:04:39.106 05:06:55 -- setup/common.sh@19 -- # local var val
00:04:39.106 05:06:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.106 05:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
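Note: the checks at hugepages.sh@107 and @109 just traced reduce to the identity below; the literal 1024 on their left-hand side was expanded by xtrace before the comparison ran. A standalone rendering with this run's values:

    nr_hugepages=1024; surp=0; resv=0; total=1024
    (( total == nr_hugepages + surp + resv ))   # @107: pool fully accounted for
    (( total == nr_hugepages ))                 # @109: exactly the requested count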
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.106 05:06:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.106 05:06:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.106 05:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.106 05:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42919468 kB' 'MemAvailable: 46634488 kB' 'Buffers: 4100 kB' 'Cached: 11235636 kB' 'SwapCached: 0 kB' 'Active: 8015116 kB' 'Inactive: 3698740 kB' 'Active(anon): 7625156 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477336 kB' 'Mapped: 195540 kB' 'Shmem: 7151036 kB' 'KReclaimable: 246372 kB' 'Slab: 1031264 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784892 kB' 'KernelStack: 21840 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8796272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217804 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # continue 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.106 05:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.106 05:06:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.106 05:06:55 -- 
setup/common.sh@32 -- # continue
[ xtrace condensed: /proc/meminfo keys Inactive through Unaccepted tested one by one against HugePages_Total; every check takes "continue" ]
00:04:39.108 05:06:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.108 05:06:55 -- setup/common.sh@33 -- # echo 1024
00:04:39.108 05:06:55 -- setup/common.sh@33 -- # return 0
00:04:39.108 05:06:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.108 05:06:55 -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.108 05:06:55 -- setup/hugepages.sh@27 -- # local node
00:04:39.108 05:06:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.108 05:06:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.108 05:06:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.108 05:06:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:39.108 05:06:55 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.108 05:06:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.108 05:06:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.108 05:06:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
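Every get_meminfo invocation in this trace (HugePages_Total above; HugePages_Surp, AnonHugePages and HugePages_Rsvd below) replays the same setup/common.sh statements tagged @16-@33. A minimal sketch of the helper as it can be reconstructed from those traced statements; the control flow joining them, the extglob requirement, and the no-match return value are inferred rather than shown in this log:

#!/usr/bin/env bash
shopt -s extglob   # assumed: needed for the +([0-9]) pattern seen at common.sh@29

# get_meminfo <field> [node] -- print a field from /proc/meminfo, or from the
# per-node meminfo file when a NUMA node is given. Reconstructed sketch.
get_meminfo() {
    local get=$1 node=$2                                            # common.sh@17-@18
    local var val                                                   # @19
    local mem_f mem                                                 # @20
    mem_f=/proc/meminfo                                             # @22
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then    # @23
        mem_f=/sys/devices/system/node/node$node/meminfo            # @24
    elif [[ -n $node ]]; then                                       # @25
        return 1   # inferred: a node was requested but has no meminfo file
    fi
    mapfile -t mem < "$mem_f"                                       # @28
    mem=("${mem[@]#Node +([0-9]) }")                                # @29: strip "Node N " prefix
    # @31-@33: scan "key: value" pairs until $get matches, then print the value;
    # the @16 printf in the trace is the process substitution feeding this loop.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # inferred: field not present
}

It is called here as get_meminfo HugePages_Total, get_meminfo HugePages_Surp 0, and so on, matching the invocations in the trace.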
00:04:39.108 05:06:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.108 05:06:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.108 05:06:55 -- setup/common.sh@18 -- # local node=0
00:04:39.108 05:06:55 -- setup/common.sh@19 -- # local var val
00:04:39.108 05:06:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.108 05:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.108 05:06:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.108 05:06:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.108 05:06:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.108 05:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.108 05:06:55 -- setup/common.sh@31 -- # IFS=': '
00:04:39.108 05:06:55 -- setup/common.sh@31 -- # read -r var val _
00:04:39.108 05:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25972256 kB' 'MemUsed: 6613112 kB' 'SwapCached: 0 kB' 'Active: 2837504 kB' 'Inactive: 176568 kB' 'Active(anon): 2653220 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712408 kB' 'Mapped: 115628 kB' 'AnonPages: 304784 kB' 'Shmem: 2351556 kB' 'KernelStack: 12264 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106864 kB' 'Slab: 497828 kB' 'SReclaimable: 106864 kB' 'SUnreclaim: 390964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ xtrace condensed: node0 meminfo keys MemTotal through HugePages_Free tested one by one against HugePages_Surp; every check takes "continue" ]
00:04:39.109 05:06:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.109 05:06:55 -- setup/common.sh@33 -- # echo 0
00:04:39.109 05:06:55 -- setup/common.sh@33 -- # return 0
00:04:39.109 05:06:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.109 05:06:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.109 05:06:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.109 05:06:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.109 05:06:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:39.109 node0=1024 expecting 1024
00:04:39.109 05:06:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
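The @115-@130 lines around this point condense to the per-node bookkeeping below. Array and variable names are taken from the trace; the loop bodies joining them are inferred, and the trace's literal (( nodes_test[node] += 0 )) at @117 is written here as the command substitution (using the get_meminfo sketch above) that plausibly produced that 0:

# Sketch of setup/hugepages.sh@115-@130 as traced; the glue between the traced
# statements is an inference, not a quote of the script.
declare -a nodes_test nodes_sys sorted_t sorted_s   # nodes_* filled by get_nodes (@27-@33)
resv=0   # assumed initialized earlier; the @110 check above uses it

for node in "${!nodes_test[@]}"; do                                    # @115
    (( nodes_test[node] += resv ))                                     # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # @117, 0 in this run
done
for node in "${!nodes_test[@]}"; do                                    # @126
    sorted_t[nodes_test[node]]=1                                       # @127
    sorted_s[nodes_sys[node]]=1                                        # @127
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # @128 -> "node0=1024 expecting 1024"
done
# @130 then compares each expected count against the sysfs value, e.g. [[ 1024 == 1024 ]].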
00:04:39.109 05:06:55 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:39.109 05:06:55 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:39.109 05:06:55 -- setup/hugepages.sh@202 -- # setup output
00:04:39.109 05:06:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.109 05:06:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:42.424 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:42.424 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:42.425 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:42.425 INFO: Requested 512 hugepages but 1024 already allocated on node0
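scripts/setup.sh itself is not shown in this log, only its output. The per-node allocation it reports on goes through the kernel's standard sysfs knob; a hypothetical minimal equivalent of the NRHUGE=512 request above (the sysfs path is the generic kernel interface for 2 MiB pages, not a quote of setup.sh):

# Hypothetical sketch, not the scripts/setup.sh source: request NRHUGE 2 MiB
# hugepages on one NUMA node, and report when enough pages are already
# allocated -- the situation logged above.
NRHUGE=512
node=node0
nr=/sys/devices/system/node/$node/hugepages/hugepages-2048kB/nr_hugepages
current=$(<"$nr")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on $node"
else
    echo "$NRHUGE" > "$nr"   # requires root
fi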
00:04:42.686 05:06:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:42.686 05:06:59 -- setup/hugepages.sh@89 -- # local node
00:04:42.686 05:06:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.686 05:06:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.686 05:06:59 -- setup/hugepages.sh@92 -- # local surp
00:04:42.686 05:06:59 -- setup/hugepages.sh@93 -- # local resv
00:04:42.686 05:06:59 -- setup/hugepages.sh@94 -- # local anon
00:04:42.686 05:06:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.686 05:06:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.686 05:06:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.686 05:06:59 -- setup/common.sh@18 -- # local node=
00:04:42.686 05:06:59 -- setup/common.sh@19 -- # local var val
00:04:42.686 05:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.686 05:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.686 05:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.686 05:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.686 05:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.686 05:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.686 05:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:42.686 05:06:59 -- setup/common.sh@31 -- # read -r var val _
00:04:42.686 05:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42928052 kB' 'MemAvailable: 46643072 kB' 'Buffers: 4100 kB' 'Cached: 11235708 kB' 'SwapCached: 0 kB' 'Active: 8016944 kB' 'Inactive: 3698740 kB' 'Active(anon): 7626984 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479228 kB' 'Mapped: 195572 kB' 'Shmem: 7151108 kB' 'KReclaimable: 246372 kB' 'Slab: 1031044 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784672 kB' 'KernelStack: 21952 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
[ xtrace condensed: /proc/meminfo keys MemTotal through HardwareCorrupted tested one by one against AnonHugePages; every check takes "continue" ]
00:04:42.687 05:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.687 05:06:59 -- setup/common.sh@33 -- # echo 0
00:04:42.687 05:06:59 -- setup/common.sh@33 -- # return 0
00:04:42.687 05:06:59 -- setup/hugepages.sh@97 -- # anon=0
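From here the trace is the same get_meminfo scan repeated for each quantity verify_nr_hugepages needs. Condensed, the traced @97/@99/@100 sequence amounts to the sketch below; the variable names are the trace's, while the final consistency check is an inference from the hugepages.sh@110 line earlier in the trace:

# Sketch of the verify_nr_hugepages data gathering traced here; uses the
# get_meminfo reconstruction above. The closing arithmetic is inferred.
nr_hugepages=1024                    # the expected count in this run
anon=$(get_meminfo AnonHugePages)    # @97  -> 0 in this run (THP pages, sanity-checked later)
surp=$(get_meminfo HugePages_Surp)   # @99  -> 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # @100 -> queried next in the trace
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # @110-style check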
00:04:42.687 05:06:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.687 05:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.687 05:06:59 -- setup/common.sh@18 -- # local node=
00:04:42.687 05:06:59 -- setup/common.sh@19 -- # local var val
00:04:42.687 05:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.687 05:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.687 05:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.687 05:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.687 05:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.687 05:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.687 05:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:42.687 05:06:59 -- setup/common.sh@31 -- # read -r var val _
00:04:42.688 05:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42929740 kB' 'MemAvailable: 46644760 kB' 'Buffers: 4100 kB' 'Cached: 11235716 kB' 'SwapCached: 0 kB' 'Active: 8017004 kB' 'Inactive: 3698740 kB' 'Active(anon): 7627044 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479196 kB' 'Mapped: 195544 kB' 'Shmem: 7151116 kB' 'KReclaimable: 246372 kB' 'Slab: 1031000 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784628 kB' 'KernelStack: 21984 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8801408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
[ xtrace condensed: /proc/meminfo keys MemTotal through HugePages_Rsvd tested one by one against HugePages_Surp; every check takes "continue" ]
00:04:42.689 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.689 05:06:59 -- setup/common.sh@33 -- # echo 0
00:04:42.689 05:06:59 -- setup/common.sh@33 -- # return 0
00:04:42.689 05:06:59 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.689 05:06:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.689 05:06:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.689 05:06:59 -- setup/common.sh@18 -- # local node=
00:04:42.689 05:06:59 -- setup/common.sh@19 -- # local var val
00:04:42.689 05:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.689 05:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.689 05:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.689 05:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.689 05:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.689 05:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.689 05:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42927876 kB' 'MemAvailable: 46642896 kB' 'Buffers: 4100 kB' 'Cached: 11235732 kB' 'SwapCached: 0 kB' 'Active: 8016772 kB' 'Inactive: 3698740 kB' 'Active(anon): 7626812 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478920 kB' 'Mapped: 195544 kB' 'Shmem: 7151132 kB' 'KReclaimable: 246372 kB' 'Slab: 1031000 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784628 kB' 'KernelStack: 22064 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB'
00:04:42.689 05:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:42.689 05:06:59 -- setup/common.sh@31 -- # read -r var val _
[ xtrace condensed: /proc/meminfo keys MemTotal through VmallocChunk tested one by one against HugePages_Rsvd; every check takes "continue" ]
00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690
05:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.690 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.690 05:06:59 -- setup/common.sh@33 -- # echo 0 00:04:42.690 
05:06:59 -- setup/common.sh@33 -- # return 0 00:04:42.690 05:06:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:42.690 05:06:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.690 nr_hugepages=1024 00:04:42.690 05:06:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.690 resv_hugepages=0 00:04:42.690 05:06:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.690 surplus_hugepages=0 00:04:42.690 05:06:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.690 anon_hugepages=0 00:04:42.690 05:06:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.690 05:06:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.690 05:06:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.690 05:06:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.690 05:06:59 -- setup/common.sh@18 -- # local node= 00:04:42.690 05:06:59 -- setup/common.sh@19 -- # local var val 00:04:42.690 05:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.690 05:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.690 05:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.690 05:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.690 05:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.690 05:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.690 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.691 05:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42927432 kB' 'MemAvailable: 46642452 kB' 'Buffers: 4100 kB' 'Cached: 11235748 kB' 'SwapCached: 0 kB' 'Active: 8016640 kB' 'Inactive: 3698740 kB' 'Active(anon): 7626680 kB' 'Inactive(anon): 0 kB' 'Active(file): 389960 kB' 'Inactive(file): 3698740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478776 kB' 'Mapped: 195544 kB' 'Shmem: 7151148 kB' 'KReclaimable: 246372 kB' 'Slab: 1031000 kB' 'SReclaimable: 246372 kB' 'SUnreclaim: 784628 kB' 'KernelStack: 22096 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8801580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 78400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1983860 kB' 'DirectMap2M: 19722240 kB' 'DirectMap1G: 48234496 kB' 00:04:42.691 05:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.691 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.691 05:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.691 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.691 05:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:42.691 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.691 05:06:59 -- setup/common.sh@31 -- # read -r var val _
[... the same field-check/continue cycle repeats for Buffers through Unaccepted; none match HugePages_Total ...]
00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.692 05:06:59 -- setup/common.sh@33 -- # echo 1024 00:04:42.692 05:06:59 -- setup/common.sh@33 -- # return 0 00:04:42.692 05:06:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.692 05:06:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.692 05:06:59 -- setup/hugepages.sh@27 -- # local node 00:04:42.692 05:06:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.692 05:06:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.692 05:06:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.692 05:06:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:42.692 05:06:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.692 05:06:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.692 05:06:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.692 05:06:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.692 05:06:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.692 05:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.692 05:06:59
-- setup/common.sh@18 -- # local node=0 00:04:42.692 05:06:59 -- setup/common.sh@19 -- # local var val 00:04:42.692 05:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.692 05:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.692 05:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.692 05:06:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.692 05:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.692 05:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.692 05:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25984112 kB' 'MemUsed: 6601256 kB' 'SwapCached: 0 kB' 'Active: 2839024 kB' 'Inactive: 176568 kB' 'Active(anon): 2654740 kB' 'Inactive(anon): 0 kB' 'Active(file): 184284 kB' 'Inactive(file): 176568 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2712472 kB' 'Mapped: 115632 kB' 'AnonPages: 306324 kB' 'Shmem: 2351620 kB' 'KernelStack: 12472 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106864 kB' 'Slab: 497680 kB' 'SReclaimable: 106864 kB' 'SUnreclaim: 390816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.692 05:06:59 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.692 05:06:59 -- setup/common.sh@32 -- # continue 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.692 05:06:59 -- setup/common.sh@31 -- # read -r var val _
[... the same field-check/continue cycle repeats for Active(file) through HugePages_Free of node0's meminfo; none match HugePages_Surp ...]
00:04:42.693 05:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.693 05:06:59 -- setup/common.sh@33 -- # echo 0 00:04:42.693 05:06:59 -- setup/common.sh@33 -- # return 0 00:04:42.693 05:06:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.693 05:06:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.693 05:06:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.693 05:06:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.693 05:06:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.693 node0=1024 expecting 1024 00:04:42.693 05:06:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.693 00:04:42.693 real 0m7.035s 00:04:42.693 user 0m2.747s 00:04:42.693 sys 0m4.414s 00:04:42.693 05:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.693 05:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:42.693 ************************************ 00:04:42.693 END TEST no_shrink_alloc 00:04:42.693 ************************************ 00:04:42.693 05:06:59 -- setup/hugepages.sh@217 -- # clear_hp 00:04:42.693 05:06:59 -- setup/hugepages.sh@37 -- # local node hp 00:04:42.693 05:06:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.693 05:06:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.693 05:06:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.693 05:06:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.693 05:06:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.693 05:06:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.693 05:06:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.693 05:06:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.693 05:06:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.693 05:06:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.693 05:06:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.693 05:06:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.693 00:04:42.693 real 0m27.595s 00:04:42.693 user 0m9.735s 00:04:42.693 sys 0m16.392s 00:04:42.693 05:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.693 05:06:59 -- common/autotest_common.sh@10 -- # set +x
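The no_shrink_alloc trace above is dominated by one helper pattern: setup/common.sh walks /proc/meminfo (or a per-node meminfo) with IFS=': ' and read -r var val _, skipping every field until the requested key matches, then echoes its value. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo input (the function name is illustrative, not the SPDK helper itself):

    get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # non-matching fields are skipped, as in the trace
        echo "$val"                        # numeric value; any trailing "kB" lands in $_
        return 0
      done < "$file"
      return 1                             # key not present
    }

With the values printed above, get_meminfo_value HugePages_Total would yield 1024 and get_meminfo_value HugePages_Rsvd would yield 0, which is what drives resv=0 and the (( 1024 == nr_hugepages + surp + resv )) check.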
00:04:42.693 ************************************ 00:04:42.693 END TEST hugepages 00:04:42.693 ************************************ 00:04:42.951 05:06:59 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:42.951 05:06:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.951 05:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.951 05:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:42.951 ************************************ 00:04:42.951 START TEST driver 00:04:42.951 ************************************ 00:04:42.951 05:06:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:42.952 * Looking for test storage... 00:04:42.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:42.952 05:06:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.952 05:06:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.952 05:06:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.952 05:06:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.952 05:06:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.952 05:06:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.952 05:06:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.952 05:06:59 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.952 05:06:59 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.952 05:06:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.952 05:06:59 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.952 05:06:59 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.952 05:06:59 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.952 05:06:59 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.952 05:06:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.952 05:06:59 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.952 05:06:59 -- scripts/common.sh@344 -- # : 1 00:04:42.952 05:06:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.952 05:06:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.952 05:06:59 -- scripts/common.sh@364 -- # decimal 1 00:04:42.952 05:06:59 -- scripts/common.sh@352 -- # local d=1 00:04:42.952 05:06:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.952 05:06:59 -- scripts/common.sh@354 -- # echo 1 00:04:42.952 05:06:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.952 05:06:59 -- scripts/common.sh@365 -- # decimal 2 00:04:42.952 05:06:59 -- scripts/common.sh@352 -- # local d=2 00:04:42.952 05:06:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.952 05:06:59 -- scripts/common.sh@354 -- # echo 2 00:04:42.952 05:06:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.952 05:06:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.952 05:06:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.952 05:06:59 -- scripts/common.sh@367 -- # return 0 00:04:42.952 05:06:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.952 05:06:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.952 --rc genhtml_branch_coverage=1 00:04:42.952 --rc genhtml_function_coverage=1 00:04:42.952 --rc genhtml_legend=1 00:04:42.952 --rc geninfo_all_blocks=1 00:04:42.952 --rc geninfo_unexecuted_blocks=1 00:04:42.952 00:04:42.952 ' 00:04:42.952 05:06:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.952 --rc genhtml_branch_coverage=1 00:04:42.952 --rc genhtml_function_coverage=1 00:04:42.952 --rc genhtml_legend=1 00:04:42.952 --rc geninfo_all_blocks=1 00:04:42.952 --rc geninfo_unexecuted_blocks=1 00:04:42.952 00:04:42.952 ' 00:04:42.952 05:06:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.952 --rc genhtml_branch_coverage=1 00:04:42.952 --rc genhtml_function_coverage=1 00:04:42.952 --rc genhtml_legend=1 00:04:42.952 --rc geninfo_all_blocks=1 00:04:42.952 --rc geninfo_unexecuted_blocks=1 00:04:42.952 00:04:42.952 ' 00:04:42.952 05:06:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.952 --rc genhtml_branch_coverage=1 00:04:42.952 --rc genhtml_function_coverage=1 00:04:42.952 --rc genhtml_legend=1 00:04:42.952 --rc geninfo_all_blocks=1 00:04:42.952 --rc geninfo_unexecuted_blocks=1 00:04:42.952 00:04:42.952 ' 00:04:42.952 05:06:59 -- setup/driver.sh@68 -- # setup reset 00:04:42.952 05:06:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.952 05:06:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.217 05:07:04 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:48.217 05:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.217 05:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.217 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.217 ************************************ 00:04:48.217 START TEST guess_driver 00:04:48.217 ************************************ 00:04:48.217 05:07:04 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:48.217 05:07:04 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:48.217 05:07:04 -- setup/driver.sh@47 -- # local fail=0 00:04:48.217 05:07:04 -- setup/driver.sh@49 -- # pick_driver 00:04:48.217 05:07:04 -- setup/driver.sh@36 -- 
# vfio 00:04:48.217 05:07:04 -- setup/driver.sh@21 -- # local iommu_grups 00:04:48.217 05:07:04 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:48.217 05:07:04 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:48.217 05:07:04 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:48.217 05:07:04 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:48.217 05:07:04 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:48.217 05:07:04 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:48.217 05:07:04 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:48.217 05:07:04 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:48.217 05:07:04 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:48.217 05:07:04 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:48.217 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:48.217 05:07:04 -- setup/driver.sh@30 -- # return 0 00:04:48.217 05:07:04 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:48.217 05:07:04 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:48.217 05:07:04 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:48.217 05:07:04 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:48.217 Looking for driver=vfio-pci 00:04:48.217 05:07:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.217 05:07:04 -- setup/driver.sh@45 -- # setup output config 00:04:48.217 05:07:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.217 05:07:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.499 05:07:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.499 05:07:07 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:51.499 05:07:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read -r _ _ _ _ marker setup_driver cycle repeats at 05:07:07 for each device line that setup.sh config prints ...]
00:04:53.404 05:07:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:53.404 05:07:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:53.404 05:07:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:53.404 05:07:09 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:53.404 05:07:09 -- setup/driver.sh@65 -- # setup reset 00:04:53.404 05:07:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.404 05:07:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.676 00:04:58.676 real 0m10.057s 00:04:58.676 user 0m2.348s 00:04:58.676 sys 0m4.919s 00:04:58.676 05:07:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.676 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.676 ************************************ 00:04:58.676 END TEST guess_driver 00:04:58.676 ************************************ 00:04:58.676 00:04:58.676 real 0m14.933s 00:04:58.676 user 0m3.667s 00:04:58.676 sys 0m7.686s 00:04:58.676 05:07:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.676 05:07:14 -- common/autotest_common.sh@10 -- # set +x
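The guess_driver pass just completed reduces to one decision: with a populated IOMMU (176 groups on this node) and a vfio_pci module that modprobe can resolve, pick vfio-pci; otherwise report the failure string the script tests for at driver.sh@51. A rough sketch of that decision under those assumptions (illustrative, not the setup/driver.sh source):

    pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      # vfio-pci needs IOMMU groups plus a module whose dependency chain resolves
      if [[ -d ${groups[0]} ]] && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
      else
        echo 'No valid driver found'   # the exact string compared at driver.sh@51
      fi
    }

Failure is signaled in-band through that string rather than an exit status, which is why the trace checks [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] before echoing 'Looking for driver=vfio-pci'.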
************************************ 00:04:58.676 END TEST driver 00:04:58.676 ************************************ 00:04:58.676 05:07:14 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:58.676 05:07:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.676 05:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.676 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.676 ************************************ 00:04:58.676 START TEST devices 00:04:58.676 ************************************ 00:04:58.676 05:07:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:58.676 * Looking for test storage... 00:04:58.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:58.676 05:07:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.676 05:07:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.676 05:07:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.676 05:07:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.676 05:07:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
[... the cmp_versions 1.15 '<' 2 trace and the LCOV_OPTS/LCOV exports repeat here verbatim, matching the run shown above at the start of TEST driver ...]
00:04:58.676 05:07:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:58.676 05:07:14 -- setup/devices.sh@192 -- # setup reset 00:04:58.676 05:07:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.676 05:07:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.968 05:07:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:01.968 05:07:18 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:01.968 05:07:18 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:01.968 05:07:18 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:01.968 05:07:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:01.968 05:07:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:01.968 05:07:18 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:01.968 05:07:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.968 05:07:18 --
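Both the driver and devices tests open with the same lcov probe: cmp_versions splits the two versions on '.', '-' and ':', then compares them component-wise, treating missing components as 0. A compact sketch of that comparison (illustrative, not the scripts/common.sh implementation):

    version_lt() {   # succeeds when $1 sorts before $2, component-wise
      local -a a b
      local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller component: less-than
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger component: not less-than
      done
      return 1   # equal versions are not less-than
    }

Here version_lt 1.15 2 succeeds (1 < 2 on the first component), matching the 'return 0' in the trace and enabling the LCOV branch and function coverage options.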
00:05:01.968 05:07:18 -- setup/devices.sh@196 -- # blocks=()
00:05:01.968 05:07:18 -- setup/devices.sh@196 -- # declare -a blocks
00:05:01.968 05:07:18 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:01.968 05:07:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:01.968 05:07:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:01.968 05:07:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:01.968 05:07:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:01.968 05:07:18 -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:01.968 05:07:18 -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:05:01.968 05:07:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:05:01.968 05:07:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:01.968 05:07:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:05:01.968 05:07:18 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:01.968 No valid GPT data, bailing
00:05:01.968 05:07:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:01.968 05:07:18 -- scripts/common.sh@393 -- # pt=
00:05:01.968 05:07:18 -- scripts/common.sh@394 -- # return 1
00:05:01.968 05:07:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:01.968 05:07:18 -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:01.968 05:07:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:01.968 05:07:18 -- setup/common.sh@80 -- # echo 2000398934016
00:05:01.968 05:07:18 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:05:01.968 05:07:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:01.968 05:07:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:05:01.968 05:07:18 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:01.968 05:07:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:05:01.968 05:07:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:01.968 05:07:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:01.968 05:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:01.968 05:07:18 -- common/autotest_common.sh@10 -- # set +x
00:05:01.969 ************************************
00:05:01.969 START TEST nvme_mount
00:05:01.969 ************************************
00:05:01.969 05:07:18 -- common/autotest_common.sh@1114 -- # nvme_mount
00:05:01.969 05:07:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:01.969 05:07:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:01.969 05:07:18 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:01.969 05:07:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:01.969 05:07:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:01.969 05:07:18 -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:01.969 05:07:18 -- setup/common.sh@40 -- # local part_no=1
00:05:01.969 05:07:18 -- setup/common.sh@41 -- # local size=1073741824
00:05:01.969 05:07:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:01.969 05:07:18 -- setup/common.sh@44 -- # parts=()
00:05:01.969 05:07:18 -- setup/common.sh@44 -- # local parts
00:05:01.969 05:07:18 -- setup/common.sh@46 -- # (( part = 1 ))
00:05:01.969 05:07:18 -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:01.969 05:07:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:01.969 05:07:18 -- setup/common.sh@46 -- # (( part++ ))
00:05:01.969 05:07:18 -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:01.969 05:07:18 -- setup/common.sh@51 -- # (( size /= 512 ))
00:05:01.969 05:07:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:01.969 05:07:18 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:02.907 Creating new GPT entries in memory.
00:05:02.907 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:02.907 other utilities.
00:05:02.907 05:07:19 -- setup/common.sh@57 -- # (( part = 1 ))
00:05:02.907 05:07:19 -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:02.907 05:07:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:02.907 05:07:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:02.907 05:07:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:03.844 Creating new GPT entries in memory.
00:05:03.844 The operation has completed successfully.
00:05:03.844 05:07:20 -- setup/common.sh@57 -- # (( part++ ))
00:05:03.844 05:07:20 -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:03.844 05:07:20 -- setup/common.sh@62 -- # wait 1621842
00:05:03.844 05:07:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.844 05:07:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=
00:05:03.844 05:07:20 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.844 05:07:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:03.844 05:07:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:03.844 05:07:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.844 05:07:20 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:03.844 05:07:20 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:05:03.844 05:07:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:03.844 05:07:20 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.844 05:07:20 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:03.844 05:07:20 -- setup/devices.sh@53 -- # local found=0
00:05:03.844 05:07:20 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:03.844 05:07:20 -- setup/devices.sh@56 -- # :
00:05:03.844 05:07:20 -- setup/devices.sh@59 -- # local pci status
00:05:03.844 05:07:20 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.844 05:07:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:05:03.844 05:07:20 -- setup/devices.sh@47 -- # setup output config
00:05:03.844 05:07:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.844 05:07:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:07.134 05:07:23 -- setup/devices.sh@63 -- # found=1
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.134 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.134 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.135 05:07:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:07.135 05:07:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.393 05:07:23 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:07.393 05:07:23 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:07.394 05:07:23 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.394 05:07:23 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
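The verify loop traced above reads each line of `setup.sh status` output as `read -r pci _ _ status`, compares the BDF field against the PCI_ALLOWED target, and glob-matches the status text for the expected mount (here nvme0n1:nvme0n1p1). A rough standalone reconstruction of that loop follows; the field layout of the input lines is assumed from the log, and verify_status is an illustrative name.

# Walk status-style lines and report whether the allowed BDF lists
# the expected device/mount (field positions assumed from the trace).
verify_status() {
    local allowed=$1 expect=$2 pci _ status found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$allowed" && $status == *"$expect"* ]]; then
            found=1
        fi
    done
    (( found == 1 )) && echo found || echo missing
}

# Hypothetical two-line feed shaped like the log output above:
printf '%s\n' \
    '0000:d8:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev' \
    '0000:00:04.0 8086 2021 idle' \
| verify_status 0000:d8:00.0 'nvme0n1:nvme0n1p1'  # prints: found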
00:05:07.394 05:07:23 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.394 05:07:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:07.394 05:07:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.394 05:07:23 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.394 05:07:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.394 05:07:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.394 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.394 05:07:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.394 05:07:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.652 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:07.652 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:07.652 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:07.652 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:07.652 05:07:24 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:07.652 05:07:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:07.652 05:07:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.652 05:07:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:07.652 05:07:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:07.652 05:07:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.652 05:07:24 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.652 05:07:24 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:07.652 05:07:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:07.652 05:07:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.652 05:07:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.652 05:07:24 -- setup/devices.sh@53 -- # local found=0 00:05:07.652 05:07:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.652 05:07:24 -- setup/devices.sh@56 -- # : 00:05:07.652 05:07:24 -- setup/devices.sh@59 -- # local pci status 00:05:07.652 05:07:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.652 05:07:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:07.652 05:07:24 -- setup/devices.sh@47 -- # setup output config 00:05:07.652 05:07:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.652 05:07:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:10.942 05:07:26 -- setup/devices.sh@63 -- # found=1 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.942 05:07:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.942 05:07:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:10.942 05:07:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.942 05:07:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.942 05:07:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.942 05:07:26 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.942 05:07:27 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:10.942 05:07:27 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:10.942 05:07:27 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:10.942 05:07:27 -- setup/devices.sh@50 -- # local mount_point= 00:05:10.942 05:07:27 -- setup/devices.sh@51 -- # local test_file= 00:05:10.942 05:07:27 -- setup/devices.sh@53 -- # local found=0 00:05:10.942 05:07:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.942 05:07:27 -- setup/devices.sh@59 -- # local pci status 00:05:10.942 05:07:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.942 05:07:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:10.942 05:07:27 -- setup/devices.sh@47 -- # setup output config 00:05:10.942 05:07:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.942 05:07:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:14.247 05:07:30 -- setup/devices.sh@63 -- # found=1 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.247 05:07:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.247 05:07:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:14.247 05:07:30 -- setup/devices.sh@68 -- # return 0 00:05:14.247 05:07:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:14.247 05:07:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.247 05:07:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.247 05:07:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.247 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.247 00:05:14.247 real 0m12.363s 00:05:14.247 user 0m3.423s 00:05:14.247 sys 0m6.714s 00:05:14.247 05:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.247 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:14.247 ************************************ 00:05:14.247 END TEST nvme_mount 00:05:14.247 ************************************ 00:05:14.247 05:07:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:14.247 05:07:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.247 05:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.247 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:14.247 ************************************ 00:05:14.247 START TEST dm_mount 00:05:14.247 ************************************ 00:05:14.247 05:07:30 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:14.247 05:07:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:14.247 05:07:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:14.247 05:07:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:14.247 05:07:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:14.247 05:07:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.247 05:07:30 -- setup/common.sh@40 -- # local part_no=2 00:05:14.247 05:07:30 -- setup/common.sh@41 -- # local size=1073741824 00:05:14.247 05:07:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.247 05:07:30 -- setup/common.sh@44 -- # parts=() 00:05:14.247 05:07:30 -- setup/common.sh@44 -- # local parts 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.247 05:07:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.247 05:07:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.247 05:07:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.247 05:07:30 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:14.247 05:07:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.247 
05:07:30 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:15.187 Creating new GPT entries in memory. 00:05:15.187 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:15.187 other utilities. 00:05:15.187 05:07:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:15.187 05:07:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.187 05:07:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.187 05:07:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.187 05:07:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:16.127 Creating new GPT entries in memory. 00:05:16.127 The operation has completed successfully. 00:05:16.127 05:07:32 -- setup/common.sh@57 -- # (( part++ )) 00:05:16.127 05:07:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.127 05:07:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.127 05:07:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.127 05:07:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:17.508 The operation has completed successfully. 00:05:17.508 05:07:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:17.508 05:07:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.508 05:07:33 -- setup/common.sh@62 -- # wait 1626352 00:05:17.508 05:07:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:17.508 05:07:33 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.508 05:07:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.508 05:07:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:17.508 05:07:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:17.508 05:07:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.508 05:07:33 -- setup/devices.sh@161 -- # break 00:05:17.508 05:07:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.508 05:07:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:17.508 05:07:33 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:17.508 05:07:33 -- setup/devices.sh@166 -- # dm=dm-2 00:05:17.508 05:07:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:17.508 05:07:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:17.508 05:07:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.508 05:07:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:17.508 05:07:33 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.508 05:07:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.508 05:07:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:17.508 05:07:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.508 05:07:33 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.508 05:07:33 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:17.508 05:07:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:17.508 05:07:33 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.508 05:07:33 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.508 05:07:33 -- setup/devices.sh@53 -- # local found=0 00:05:17.508 05:07:33 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:17.508 05:07:33 -- setup/devices.sh@56 -- # : 00:05:17.508 05:07:33 -- setup/devices.sh@59 -- # local pci status 00:05:17.508 05:07:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.508 05:07:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:17.508 05:07:33 -- setup/devices.sh@47 -- # setup output config 00:05:17.508 05:07:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.508 05:07:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:20.044 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.044 05:07:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.044 05:07:36 -- setup/devices.sh@63 -- # found=1 00:05:20.044 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:20.304 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.304 05:07:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.304 05:07:36 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:20.304 05:07:36 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:20.304 05:07:36 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.304 05:07:36 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.304 05:07:36 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:20.565 05:07:36 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:20.565 05:07:36 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:20.565 05:07:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:20.565 05:07:36 -- setup/devices.sh@50 -- # local mount_point= 00:05:20.565 05:07:36 -- setup/devices.sh@51 -- # local test_file= 00:05:20.565 05:07:36 -- setup/devices.sh@53 -- # local found=0 00:05:20.565 05:07:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.565 05:07:36 -- setup/devices.sh@59 -- # local pci status 00:05:20.565 05:07:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.565 05:07:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:20.565 05:07:36 -- setup/devices.sh@47 -- # setup output config 00:05:20.565 05:07:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.565 05:07:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:23.855 05:07:39 -- setup/devices.sh@63 -- # found=1 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.855 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.855 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.856 05:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.856 05:07:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.856 05:07:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.856 05:07:40 -- setup/devices.sh@68 -- # return 0 00:05:23.856 05:07:40 -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.856 05:07:40 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:23.856 05:07:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.856 05:07:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.856 05:07:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.856 05:07:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.856 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.856 05:07:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.856 05:07:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.856 00:05:23.856 real 0m9.593s 00:05:23.856 user 0m2.306s 00:05:23.856 sys 0m4.294s 00:05:23.856 05:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.856 05:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:23.856 
00:05:23.856 ************************************
00:05:23.856 END TEST dm_mount
00:05:23.856 ************************************
00:05:23.856 05:07:40 -- setup/devices.sh@1 -- # cleanup
00:05:23.856 05:07:40 -- setup/devices.sh@11 -- # cleanup_nvme
00:05:23.856 05:07:40 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.856 05:07:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:23.856 05:07:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:23.856 05:07:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:23.856 05:07:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:24.115 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:24.115 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:05:24.115 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:24.115 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:24.115 05:07:40 -- setup/devices.sh@12 -- # cleanup_dm
00:05:24.115 05:07:40 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
00:05:24.115 05:07:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:24.115 05:07:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:24.115 05:07:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:24.115 05:07:40 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:24.115 05:07:40 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:24.115
00:05:24.115 real 0m26.231s
00:05:24.115 user 0m7.223s
00:05:24.115 sys 0m13.710s
00:05:24.115 05:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:24.115 05:07:40 -- common/autotest_common.sh@10 -- # set +x
00:05:24.115 ************************************
00:05:24.115 END TEST devices
00:05:24.115 ************************************
00:05:24.115
00:05:24.115 real 1m34.134s
00:05:24.115 user 0m28.500s
00:05:24.115 sys 0m53.019s
00:05:24.115 05:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:24.115 05:07:40 -- common/autotest_common.sh@10 -- # set +x
00:05:24.115 ************************************
00:05:24.115 END TEST setup.sh
00:05:24.115 ************************************
00:05:24.115 05:07:40 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:05:27.452 Hugepages
00:05:27.452 node hugesize free / total
00:05:27.452 node0 1048576kB 0 / 0
00:05:27.452 node0 2048kB 2048 / 2048
00:05:27.452 node1 1048576kB 0 / 0
00:05:27.452 node1 2048kB 0 / 0
00:05:27.452
00:05:27.452 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:27.452 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:27.452 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:27.452 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:27.452 05:07:43 -- spdk/autotest.sh@128 -- # uname -s
00:05:27.452 05:07:43 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:05:27.452 05:07:43 -- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:05:27.452 05:07:43 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:30.791 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:30.791 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:32.698 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:05:32.698 05:07:48 -- common/autotest_common.sh@1527 -- # sleep 1
00:05:33.637 05:07:49 -- common/autotest_common.sh@1528 -- # bdfs=()
00:05:33.637 05:07:49 -- common/autotest_common.sh@1528 -- # local bdfs
00:05:33.637 05:07:49 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:05:33.637 05:07:49 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:05:33.637 05:07:49 -- common/autotest_common.sh@1508 -- # bdfs=()
00:05:33.637 05:07:49 -- common/autotest_common.sh@1508 -- # local bdfs
00:05:33.637 05:07:49 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:33.637 05:07:49 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:33.637 05:07:49 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:33.637 05:07:50 -- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:33.638 05:07:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0
00:05:33.638 05:07:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:05:36.930 Waiting for block devices as requested
00:05:36.930 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:36.930 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:36.930 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:37.190 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:37.190 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:37.190 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:37.450 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:37.450 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:37.450 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:37.709 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:37.709 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:37.709 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:37.969 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:37.969 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:37.969 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:38.228 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:38.228 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:05:38.488 05:07:54 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:05:38.488 05:07:54 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme
00:05:38.488 05:07:54 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]]
00:05:38.488 05:07:54 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:05:38.488 05:07:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1540 -- # grep oacs
00:05:38.488 05:07:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:38.488 05:07:54 -- common/autotest_common.sh@1540 -- # oacs=' 0xe'
00:05:38.488 05:07:54 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:05:38.488 05:07:54 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:05:38.488 05:07:54 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:05:38.488 05:07:54 -- common/autotest_common.sh@1549 -- # cut -d: -f2
00:05:38.488 05:07:54 -- common/autotest_common.sh@1549 -- # grep unvmcap
00:05:38.488 05:07:54 -- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:05:38.488 05:07:54 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]]
00:05:38.488 05:07:54 -- common/autotest_common.sh@1552 -- # continue
00:05:38.488 05:07:54 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:05:38.488 05:07:54 -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:38.488 05:07:54 -- common/autotest_common.sh@10 -- # set +x
00:05:38.488 05:07:54 -- spdk/autotest.sh@136 -- # timing_enter afterboot
00:05:38.488 05:07:54 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:38.488 05:07:54 -- common/autotest_common.sh@10 -- # set +x
00:05:38.488 05:07:54 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:41.778 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:41.778 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:44.315 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
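The pre_cleanup trace above resolves the controller node from its bdf through sysfs, then decides whether it supports namespace management by pulling the oacs field out of `nvme id-ctrl` and testing bit 0x8 (stored as oacs_ns_manage=8), reading unvmcap the same way. A sketch of that probe under the assumption that nvme-cli is installed, the script runs as root, and /dev/nvme0 is the controller resolved earlier:

# Probe Optional Admin Command Support the way the traced helper does:
# keep the value after the colon, then test the namespace-management bit.
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    echo "$ctrlr supports namespace management"
fi

# The same grep/cut pattern is used for the unallocated-capacity field.
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
(( unvmcap == 0 )) && echo "no unallocated NVM capacity to reclaim"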
00:05:44.315 05:08:00 -- spdk/autotest.sh@138 -- # timing_exit afterboot
00:05:44.315 05:08:00 -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:44.315 05:08:00 -- common/autotest_common.sh@10 -- # set +x
00:05:44.315 05:08:00 -- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:05:44.315 05:08:00 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:05:44.315 05:08:00 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:05:44.315 05:08:00 -- common/autotest_common.sh@1572 -- # bdfs=()
00:05:44.315 05:08:00 -- common/autotest_common.sh@1572 -- # local bdfs
00:05:44.315 05:08:00 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:05:44.315 05:08:00 -- common/autotest_common.sh@1508 -- # bdfs=()
00:05:44.315 05:08:00 -- common/autotest_common.sh@1508 -- # local bdfs
00:05:44.315 05:08:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:44.315 05:08:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:44.315 05:08:00 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:44.315 05:08:00 -- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:44.315 05:08:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0
00:05:44.315 05:08:00 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:05:44.315 05:08:00 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device
00:05:44.315 05:08:00 -- common/autotest_common.sh@1575 -- # device=0x0a54
00:05:44.315 05:08:00 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:44.315 05:08:00 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf)
00:05:44.315 05:08:00 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0
00:05:44.315 05:08:00 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]]
00:05:44.315 05:08:00 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1636288
00:05:44.315 05:08:00 -- common/autotest_common.sh@1593 -- # waitforlisten 1636288
00:05:44.315 05:08:00 -- common/autotest_common.sh@829 -- # '[' -z 1636288 ']'
00:05:44.315 05:08:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.315 05:08:00 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:44.315 05:08:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.315 05:08:00 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:44.315 05:08:00 -- common/autotest_common.sh@10 -- # set +x
00:05:44.315 05:08:00 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:05:44.315 [2024-11-19 05:08:00.559649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
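opal_revert_cleanup above keeps only bdfs whose PCI device ID reads back as 0x0a54 (via cat /sys/bus/pci/devices/<bdf>/device). A simplified reconstruction of that filter follows; the real get_nvme_bdfs enumerates controllers through gen_nvme.sh | jq, so walking /sys/class/nvme here is an assumption that only holds while the kernel nvme driver is still bound.

# Collect NVMe bdfs whose PCI device ID matches a target, mirroring
# the get_nvme_bdfs_by_id 0x0a54 filter traced above.
want=0x0a54
bdfs=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:d8:00.0
    dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0x0a54
    [[ $dev_id == "$want" ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"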
00:05:44.315 [2024-11-19 05:08:00.559698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636288 ]
00:05:44.315 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.315 [2024-11-19 05:08:00.630420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.315 [2024-11-19 05:08:00.667960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:44.315 [2024-11-19 05:08:00.668078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.882 05:08:01 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:44.882 05:08:01 -- common/autotest_common.sh@862 -- # return 0
00:05:44.882 05:08:01 -- common/autotest_common.sh@1595 -- # bdf_id=0
00:05:44.882 05:08:01 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}"
00:05:44.882 05:08:01 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:05:48.172 nvme0n1
00:05:48.172 05:08:04 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:48.172 [2024-11-19 05:08:04.500174] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:48.172 request:
00:05:48.172 {
00:05:48.172 "nvme_ctrlr_name": "nvme0",
00:05:48.172 "password": "test",
00:05:48.172 "method": "bdev_nvme_opal_revert",
00:05:48.172 "req_id": 1
00:05:48.172 }
00:05:48.172 Got JSON-RPC error response
00:05:48.172 response:
00:05:48.172 {
00:05:48.172 "code": -32602,
00:05:48.172 "message": "Invalid parameters"
00:05:48.172 }
00:05:48.172 05:08:04 -- common/autotest_common.sh@1599 -- # true
00:05:48.172 05:08:04 -- common/autotest_common.sh@1600 -- # (( ++bdf_id ))
00:05:48.172 05:08:04 -- common/autotest_common.sh@1603 -- # killprocess 1636288
00:05:48.172 05:08:04 -- common/autotest_common.sh@936 -- # '[' -z 1636288 ']'
00:05:48.172 05:08:04 -- common/autotest_common.sh@940 -- # kill -0 1636288
00:05:48.172 05:08:04 -- common/autotest_common.sh@941 -- # uname
00:05:48.172 05:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:48.172 05:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1636288
00:05:48.172 05:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:48.172 05:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:48.172 05:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1636288'
00:05:48.172 killing process with pid 1636288
00:05:48.172 05:08:04 -- common/autotest_common.sh@955 -- # kill 1636288
00:05:48.172 05:08:04 -- common/autotest_common.sh@960 -- # wait 1636288
00:05:50.712 05:08:07 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']'
00:05:50.712 05:08:07 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:05:50.712 05:08:07 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:05:50.712 05:08:07 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:05:50.712 05:08:07 -- spdk/autotest.sh@160 -- # timing_enter lib
00:05:50.712 05:08:07 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:50.712 05:08:07 -- common/autotest_common.sh@10 -- # set +x
00:05:50.712 05:08:07 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:05:50.712 05:08:07 --
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.712 05:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.712 05:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 ************************************ 00:05:50.712 START TEST env 00:05:50.712 ************************************ 00:05:50.712 05:08:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:50.712 * Looking for test storage... 00:05:50.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:50.712 05:08:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.712 05:08:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.712 05:08:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.972 05:08:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.972 05:08:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.972 05:08:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.972 05:08:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.972 05:08:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.972 05:08:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.972 05:08:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.972 05:08:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.972 05:08:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.972 05:08:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.972 05:08:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.972 05:08:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.972 05:08:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.972 05:08:07 -- scripts/common.sh@344 -- # : 1 00:05:50.972 05:08:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.972 05:08:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.972 05:08:07 -- scripts/common.sh@364 -- # decimal 1 00:05:50.972 05:08:07 -- scripts/common.sh@352 -- # local d=1 00:05:50.972 05:08:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.972 05:08:07 -- scripts/common.sh@354 -- # echo 1 00:05:50.972 05:08:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.972 05:08:07 -- scripts/common.sh@365 -- # decimal 2 00:05:50.972 05:08:07 -- scripts/common.sh@352 -- # local d=2 00:05:50.972 05:08:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.972 05:08:07 -- scripts/common.sh@354 -- # echo 2 00:05:50.972 05:08:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.972 05:08:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.972 05:08:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.972 05:08:07 -- scripts/common.sh@367 -- # return 0 00:05:50.973 05:08:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.973 --rc genhtml_branch_coverage=1 00:05:50.973 --rc genhtml_function_coverage=1 00:05:50.973 --rc genhtml_legend=1 00:05:50.973 --rc geninfo_all_blocks=1 00:05:50.973 --rc geninfo_unexecuted_blocks=1 00:05:50.973 00:05:50.973 ' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.973 --rc genhtml_branch_coverage=1 00:05:50.973 --rc genhtml_function_coverage=1 00:05:50.973 --rc genhtml_legend=1 00:05:50.973 --rc geninfo_all_blocks=1 00:05:50.973 --rc geninfo_unexecuted_blocks=1 00:05:50.973 00:05:50.973 ' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.973 --rc genhtml_branch_coverage=1 00:05:50.973 --rc genhtml_function_coverage=1 00:05:50.973 --rc genhtml_legend=1 00:05:50.973 --rc geninfo_all_blocks=1 00:05:50.973 --rc geninfo_unexecuted_blocks=1 00:05:50.973 00:05:50.973 ' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.973 --rc genhtml_branch_coverage=1 00:05:50.973 --rc genhtml_function_coverage=1 00:05:50.973 --rc genhtml_legend=1 00:05:50.973 --rc geninfo_all_blocks=1 00:05:50.973 --rc geninfo_unexecuted_blocks=1 00:05:50.973 00:05:50.973 ' 00:05:50.973 05:08:07 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.973 05:08:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.973 05:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.973 ************************************ 00:05:50.973 START TEST env_memory 00:05:50.973 ************************************ 00:05:50.973 05:08:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.973 00:05:50.973 00:05:50.973 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.973 http://cunit.sourceforge.net/ 00:05:50.973 00:05:50.973 00:05:50.973 Suite: memory 00:05:50.973 Test: alloc and free memory map ...[2024-11-19 05:08:07.380561] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:50.973 passed 00:05:50.973 Test: mem map translation ...[2024-11-19 05:08:07.398751] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.973 [2024-11-19 05:08:07.398776] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.973 [2024-11-19 05:08:07.398809] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.973 [2024-11-19 05:08:07.398817] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.973 passed 00:05:50.973 Test: mem map registration ...[2024-11-19 05:08:07.434220] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:50.973 [2024-11-19 05:08:07.434238] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:50.973 passed 00:05:50.973 Test: mem map adjacent registrations ...passed 00:05:50.973 00:05:50.973 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.973 suites 1 1 n/a 0 0 00:05:50.973 tests 4 4 4 0 0 00:05:50.973 asserts 152 152 152 0 n/a 00:05:50.973 00:05:50.973 Elapsed time = 0.134 seconds 00:05:50.973 00:05:50.973 real 0m0.148s 00:05:50.973 user 0m0.134s 00:05:50.973 sys 0m0.013s 00:05:50.973 05:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.973 05:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.973 ************************************ 00:05:50.973 END TEST env_memory 00:05:50.973 ************************************ 00:05:50.973 05:08:07 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.973 05:08:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.973 05:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.973 05:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.973 ************************************ 00:05:50.973 START TEST env_vtophys 00:05:50.973 ************************************ 00:05:50.973 05:08:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:51.233 EAL: lib.eal log level changed from notice to debug 00:05:51.233 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.233 EAL: Detected lcore 1 as core 1 on socket 0 00:05:51.233 EAL: Detected lcore 2 as core 2 on socket 0 00:05:51.233 EAL: Detected lcore 3 as core 3 on socket 0 00:05:51.233 EAL: Detected lcore 4 as core 4 on socket 0 00:05:51.233 EAL: Detected lcore 5 as core 5 on socket 0 00:05:51.233 EAL: Detected lcore 6 as core 6 on socket 0 00:05:51.233 EAL: Detected lcore 7 as core 8 on socket 0 00:05:51.233 EAL: Detected lcore 8 as core 9 on socket 0 00:05:51.233 EAL: Detected lcore 9 as core 10 on socket 0 00:05:51.233 EAL: Detected lcore 10 as core 11 on socket 0 00:05:51.233 EAL: Detected lcore 11 as core 12 on socket 0 00:05:51.233 EAL: Detected lcore 12 as core 13 on socket 0 00:05:51.233 EAL: Detected lcore 13 as core 14 on socket 0 00:05:51.233 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:51.233 EAL: Detected lcore 15 as core 17 on socket 0 00:05:51.233 EAL: Detected lcore 16 as core 18 on socket 0 00:05:51.233 EAL: Detected lcore 17 as core 19 on socket 0 00:05:51.233 EAL: Detected lcore 18 as core 20 on socket 0 00:05:51.233 EAL: Detected lcore 19 as core 21 on socket 0 00:05:51.233 EAL: Detected lcore 20 as core 22 on socket 0 00:05:51.233 EAL: Detected lcore 21 as core 24 on socket 0 00:05:51.233 EAL: Detected lcore 22 as core 25 on socket 0 00:05:51.233 EAL: Detected lcore 23 as core 26 on socket 0 00:05:51.233 EAL: Detected lcore 24 as core 27 on socket 0 00:05:51.233 EAL: Detected lcore 25 as core 28 on socket 0 00:05:51.233 EAL: Detected lcore 26 as core 29 on socket 0 00:05:51.233 EAL: Detected lcore 27 as core 30 on socket 0 00:05:51.233 EAL: Detected lcore 28 as core 0 on socket 1 00:05:51.233 EAL: Detected lcore 29 as core 1 on socket 1 00:05:51.233 EAL: Detected lcore 30 as core 2 on socket 1 00:05:51.233 EAL: Detected lcore 31 as core 3 on socket 1 00:05:51.233 EAL: Detected lcore 32 as core 4 on socket 1 00:05:51.233 EAL: Detected lcore 33 as core 5 on socket 1 00:05:51.233 EAL: Detected lcore 34 as core 6 on socket 1 00:05:51.233 EAL: Detected lcore 35 as core 8 on socket 1 00:05:51.233 EAL: Detected lcore 36 as core 9 on socket 1 00:05:51.233 EAL: Detected lcore 37 as core 10 on socket 1 00:05:51.233 EAL: Detected lcore 38 as core 11 on socket 1 00:05:51.233 EAL: Detected lcore 39 as core 12 on socket 1 00:05:51.233 EAL: Detected lcore 40 as core 13 on socket 1 00:05:51.233 EAL: Detected lcore 41 as core 14 on socket 1 00:05:51.233 EAL: Detected lcore 42 as core 16 on socket 1 00:05:51.233 EAL: Detected lcore 43 as core 17 on socket 1 00:05:51.233 EAL: Detected lcore 44 as core 18 on socket 1 00:05:51.233 EAL: Detected lcore 45 as core 19 on socket 1 00:05:51.233 EAL: Detected lcore 46 as core 20 on socket 1 00:05:51.233 EAL: Detected lcore 47 as core 21 on socket 1 00:05:51.233 EAL: Detected lcore 48 as core 22 on socket 1 00:05:51.233 EAL: Detected lcore 49 as core 24 on socket 1 00:05:51.233 EAL: Detected lcore 50 as core 25 on socket 1 00:05:51.233 EAL: Detected lcore 51 as core 26 on socket 1 00:05:51.233 EAL: Detected lcore 52 as core 27 on socket 1 00:05:51.233 EAL: Detected lcore 53 as core 28 on socket 1 00:05:51.233 EAL: Detected lcore 54 as core 29 on socket 1 00:05:51.233 EAL: Detected lcore 55 as core 30 on socket 1 00:05:51.233 EAL: Detected lcore 56 as core 0 on socket 0 00:05:51.233 EAL: Detected lcore 57 as core 1 on socket 0 00:05:51.233 EAL: Detected lcore 58 as core 2 on socket 0 00:05:51.233 EAL: Detected lcore 59 as core 3 on socket 0 00:05:51.233 EAL: Detected lcore 60 as core 4 on socket 0 00:05:51.233 EAL: Detected lcore 61 as core 5 on socket 0 00:05:51.233 EAL: Detected lcore 62 as core 6 on socket 0 00:05:51.233 EAL: Detected lcore 63 as core 8 on socket 0 00:05:51.233 EAL: Detected lcore 64 as core 9 on socket 0 00:05:51.233 EAL: Detected lcore 65 as core 10 on socket 0 00:05:51.233 EAL: Detected lcore 66 as core 11 on socket 0 00:05:51.233 EAL: Detected lcore 67 as core 12 on socket 0 00:05:51.233 EAL: Detected lcore 68 as core 13 on socket 0 00:05:51.233 EAL: Detected lcore 69 as core 14 on socket 0 00:05:51.233 EAL: Detected lcore 70 as core 16 on socket 0 00:05:51.233 EAL: Detected lcore 71 as core 17 on socket 0 00:05:51.233 EAL: Detected lcore 72 as core 18 on socket 0 00:05:51.233 EAL: Detected lcore 73 as core 19 on socket 0 00:05:51.233 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:51.233 EAL: Detected lcore 75 as core 21 on socket 0 00:05:51.233 EAL: Detected lcore 76 as core 22 on socket 0 00:05:51.233 EAL: Detected lcore 77 as core 24 on socket 0 00:05:51.233 EAL: Detected lcore 78 as core 25 on socket 0 00:05:51.233 EAL: Detected lcore 79 as core 26 on socket 0 00:05:51.233 EAL: Detected lcore 80 as core 27 on socket 0 00:05:51.233 EAL: Detected lcore 81 as core 28 on socket 0 00:05:51.234 EAL: Detected lcore 82 as core 29 on socket 0 00:05:51.234 EAL: Detected lcore 83 as core 30 on socket 0 00:05:51.234 EAL: Detected lcore 84 as core 0 on socket 1 00:05:51.234 EAL: Detected lcore 85 as core 1 on socket 1 00:05:51.234 EAL: Detected lcore 86 as core 2 on socket 1 00:05:51.234 EAL: Detected lcore 87 as core 3 on socket 1 00:05:51.234 EAL: Detected lcore 88 as core 4 on socket 1 00:05:51.234 EAL: Detected lcore 89 as core 5 on socket 1 00:05:51.234 EAL: Detected lcore 90 as core 6 on socket 1 00:05:51.234 EAL: Detected lcore 91 as core 8 on socket 1 00:05:51.234 EAL: Detected lcore 92 as core 9 on socket 1 00:05:51.234 EAL: Detected lcore 93 as core 10 on socket 1 00:05:51.234 EAL: Detected lcore 94 as core 11 on socket 1 00:05:51.234 EAL: Detected lcore 95 as core 12 on socket 1 00:05:51.234 EAL: Detected lcore 96 as core 13 on socket 1 00:05:51.234 EAL: Detected lcore 97 as core 14 on socket 1 00:05:51.234 EAL: Detected lcore 98 as core 16 on socket 1 00:05:51.234 EAL: Detected lcore 99 as core 17 on socket 1 00:05:51.234 EAL: Detected lcore 100 as core 18 on socket 1 00:05:51.234 EAL: Detected lcore 101 as core 19 on socket 1 00:05:51.234 EAL: Detected lcore 102 as core 20 on socket 1 00:05:51.234 EAL: Detected lcore 103 as core 21 on socket 1 00:05:51.234 EAL: Detected lcore 104 as core 22 on socket 1 00:05:51.234 EAL: Detected lcore 105 as core 24 on socket 1 00:05:51.234 EAL: Detected lcore 106 as core 25 on socket 1 00:05:51.234 EAL: Detected lcore 107 as core 26 on socket 1 00:05:51.234 EAL: Detected lcore 108 as core 27 on socket 1 00:05:51.234 EAL: Detected lcore 109 as core 28 on socket 1 00:05:51.234 EAL: Detected lcore 110 as core 29 on socket 1 00:05:51.234 EAL: Detected lcore 111 as core 30 on socket 1 00:05:51.234 EAL: Maximum logical cores by configuration: 128 00:05:51.234 EAL: Detected CPU lcores: 112 00:05:51.234 EAL: Detected NUMA nodes: 2 00:05:51.234 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:51.234 EAL: Detected shared linkage of DPDK 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:51.234 EAL: Registered [vdev] bus. 
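
The lcore map above works out to 112 logical cores across 2 NUMA sockets, under EAL's configured maximum of 128. The same topology can be cross-checked outside of EAL with util-linux, as a minimal sketch (column names vary slightly by distro):

    # Summary counts, comparable to "Detected CPU lcores" / "Detected NUMA nodes":
    lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node)'

    # Per-lcore mapping, comparable to the "Detected lcore N as core M on socket S" lines:
    lscpu -e=CPU,CORE,SOCKET,NODE | head -n 8
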
00:05:51.234 EAL: bus.vdev log level changed from disabled to notice 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:51.234 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:51.234 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:51.234 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:51.234 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Bus pci wants IOVA as 'DC' 00:05:51.234 EAL: Bus vdev wants IOVA as 'DC' 00:05:51.234 EAL: Buses did not request a specific IOVA mode. 00:05:51.234 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:51.234 EAL: Selected IOVA mode 'VA' 00:05:51.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.234 EAL: Probing VFIO support... 00:05:51.234 EAL: IOMMU type 1 (Type 1) is supported 00:05:51.234 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:51.234 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:51.234 EAL: VFIO support initialized 00:05:51.234 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.234 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.234 EAL: Setting up physically contiguous memory... 
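
EAL only reaches this point if the hugepage pools and IOMMU state it just probed are in order (SPDK's scripts/setup.sh normally reserves the pages before a run). A hedged pre-flight check for the same prerequisites, using standard procfs/sysfs paths:

    # 2 MB pools per NUMA node, matching the "hugepage_sz:2097152" sizing below:
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

    # Non-empty when the kernel IOMMU is enabled, i.e. when EAL can report
    # "IOMMU type 1 (Type 1) is supported" and select IOVA mode 'VA':
    ls /sys/kernel/iommu_groups | wc -l
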
00:05:51.234 EAL: Setting maximum number of open files to 524288 00:05:51.234 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.234 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:51.234 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.234 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:51.234 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.234 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:51.234 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.234 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.234 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:51.234 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:51.234 EAL: Hugepages will be freed exactly as allocated. 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: TSC frequency is ~2500000 KHz 00:05:51.234 EAL: Main lcore 0 is ready (tid=7fcda316ca00;cpuset=[0]) 00:05:51.234 EAL: Trying to obtain current memory policy. 00:05:51.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.234 EAL: Restoring previous memory policy: 0 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.234 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:51.234 EAL: probe driver: 8086:37d2 net_i40e 00:05:51.234 EAL: Not managed by a supported kernel driver, skipped 00:05:51.234 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:51.234 EAL: probe driver: 8086:37d2 net_i40e 00:05:51.234 EAL: Not managed by a supported kernel driver, skipped 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:51.234 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.234 00:05:51.234 00:05:51.234 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.234 http://cunit.sourceforge.net/ 00:05:51.234 00:05:51.234 00:05:51.234 Suite: components_suite 00:05:51.234 Test: vtophys_malloc_test ...passed 00:05:51.234 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:51.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.234 EAL: Restoring previous memory policy: 4 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was expanded by 4MB 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was shrunk by 4MB 00:05:51.234 EAL: Trying to obtain current memory policy. 00:05:51.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.234 EAL: Restoring previous memory policy: 4 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was expanded by 6MB 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was shrunk by 6MB 00:05:51.234 EAL: Trying to obtain current memory policy. 00:05:51.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.234 EAL: Restoring previous memory policy: 4 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.234 EAL: Heap on socket 0 was expanded by 10MB 00:05:51.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.234 EAL: request: mp_malloc_sync 00:05:51.234 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was shrunk by 10MB 00:05:51.235 EAL: Trying to obtain current memory policy. 
00:05:51.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.235 EAL: Restoring previous memory policy: 4 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was expanded by 18MB 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was shrunk by 18MB 00:05:51.235 EAL: Trying to obtain current memory policy. 00:05:51.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.235 EAL: Restoring previous memory policy: 4 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was expanded by 34MB 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was shrunk by 34MB 00:05:51.235 EAL: Trying to obtain current memory policy. 00:05:51.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.235 EAL: Restoring previous memory policy: 4 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was expanded by 66MB 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was shrunk by 66MB 00:05:51.235 EAL: Trying to obtain current memory policy. 00:05:51.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.235 EAL: Restoring previous memory policy: 4 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was expanded by 130MB 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was shrunk by 130MB 00:05:51.235 EAL: Trying to obtain current memory policy. 00:05:51.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.235 EAL: Restoring previous memory policy: 4 00:05:51.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.235 EAL: request: mp_malloc_sync 00:05:51.235 EAL: No shared files mode enabled, IPC is disabled 00:05:51.235 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.494 EAL: request: mp_malloc_sync 00:05:51.494 EAL: No shared files mode enabled, IPC is disabled 00:05:51.494 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.494 EAL: Trying to obtain current memory policy. 
00:05:51.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.494 EAL: Restoring previous memory policy: 4 00:05:51.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.494 EAL: request: mp_malloc_sync 00:05:51.494 EAL: No shared files mode enabled, IPC is disabled 00:05:51.494 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.753 EAL: request: mp_malloc_sync 00:05:51.753 EAL: No shared files mode enabled, IPC is disabled 00:05:51.753 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.753 EAL: Trying to obtain current memory policy. 00:05:51.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.753 EAL: Restoring previous memory policy: 4 00:05:51.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.753 EAL: request: mp_malloc_sync 00:05:51.753 EAL: No shared files mode enabled, IPC is disabled 00:05:51.753 EAL: Heap on socket 0 was expanded by 1026MB 00:05:52.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.273 EAL: request: mp_malloc_sync 00:05:52.273 EAL: No shared files mode enabled, IPC is disabled 00:05:52.273 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.273 passed 00:05:52.273 00:05:52.273 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.273 suites 1 1 n/a 0 0 00:05:52.273 tests 2 2 2 0 0 00:05:52.273 asserts 497 497 497 0 n/a 00:05:52.273 00:05:52.273 Elapsed time = 0.963 seconds 00:05:52.273 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.273 EAL: request: mp_malloc_sync 00:05:52.273 EAL: No shared files mode enabled, IPC is disabled 00:05:52.273 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.273 EAL: No shared files mode enabled, IPC is disabled 00:05:52.273 EAL: No shared files mode enabled, IPC is disabled 00:05:52.273 EAL: No shared files mode enabled, IPC is disabled 00:05:52.273 00:05:52.273 real 0m1.094s 00:05:52.273 user 0m0.629s 00:05:52.273 sys 0m0.433s 00:05:52.273 05:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.273 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.273 ************************************ 00:05:52.273 END TEST env_vtophys 00:05:52.273 ************************************ 00:05:52.273 05:08:08 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.273 05:08:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.273 05:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.273 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.273 ************************************ 00:05:52.273 START TEST env_pci 00:05:52.273 ************************************ 00:05:52.273 05:08:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.273 00:05:52.273 00:05:52.273 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.273 http://cunit.sourceforge.net/ 00:05:52.273 00:05:52.273 00:05:52.273 Suite: pci 00:05:52.273 Test: pci_hook ...[2024-11-19 05:08:08.694013] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1637867 has claimed it 00:05:52.273 EAL: Cannot find device (10000:00:01.0) 00:05:52.273 EAL: Failed to attach device on primary process 00:05:52.273 passed 00:05:52.273 00:05:52.273 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.273 suites 1 1 n/a 0 0 00:05:52.273 tests 1 1 1 0 0 00:05:52.273 asserts 
25 25 25 0 n/a 00:05:52.273 00:05:52.273 Elapsed time = 0.034 seconds 00:05:52.273 00:05:52.273 real 0m0.055s 00:05:52.273 user 0m0.019s 00:05:52.273 sys 0m0.035s 00:05:52.273 05:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.273 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.273 ************************************ 00:05:52.273 END TEST env_pci 00:05:52.273 ************************************ 00:05:52.273 05:08:08 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.273 05:08:08 -- env/env.sh@15 -- # uname 00:05:52.273 05:08:08 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.273 05:08:08 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.273 05:08:08 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.273 05:08:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:52.273 05:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.273 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.273 ************************************ 00:05:52.273 START TEST env_dpdk_post_init 00:05:52.273 ************************************ 00:05:52.273 05:08:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.273 EAL: Detected CPU lcores: 112 00:05:52.273 EAL: Detected NUMA nodes: 2 00:05:52.273 EAL: Detected shared linkage of DPDK 00:05:52.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.533 EAL: Selected IOVA mode 'VA' 00:05:52.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.533 EAL: VFIO support initialized 00:05:52.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.533 EAL: Using IOMMU type 1 (Type 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:52.533 EAL: Ignore mapping IO port bar(1) 00:05:52.533 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:52.793 EAL: Ignore mapping IO port bar(1) 00:05:52.793 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:52.793 EAL: Ignore mapping IO port bar(1) 00:05:52.793 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:53.361 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:57.555 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:57.555 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:57.815 Starting DPDK initialization... 00:05:57.815 Starting SPDK post initialization... 00:05:57.815 SPDK NVMe probe 00:05:57.815 Attaching to 0000:d8:00.0 00:05:57.815 Attached to 0000:d8:00.0 00:05:57.815 Cleaning up... 00:05:57.815 00:05:57.815 real 0m5.348s 00:05:57.815 user 0m3.985s 00:05:57.815 sys 0m0.421s 00:05:57.815 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.815 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.815 ************************************ 00:05:57.815 END TEST env_dpdk_post_init 00:05:57.815 ************************************ 00:05:57.815 05:08:14 -- env/env.sh@26 -- # uname 00:05:57.815 05:08:14 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:57.815 05:08:14 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.815 05:08:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.815 05:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.815 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.815 ************************************ 00:05:57.815 START TEST env_mem_callbacks 00:05:57.815 ************************************ 00:05:57.815 05:08:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.815 EAL: Detected CPU lcores: 112 00:05:57.815 EAL: Detected NUMA nodes: 2 00:05:57.815 EAL: Detected shared linkage of DPDK 00:05:57.815 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.815 EAL: Selected IOVA mode 'VA' 00:05:57.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.815 EAL: VFIO support initialized 00:05:57.815 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.815 00:05:57.815 00:05:57.815 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.815 http://cunit.sourceforge.net/ 00:05:57.815 00:05:57.815 00:05:57.815 Suite: memory 00:05:57.815 Test: test ... 
00:05:57.815 register 0x200000200000 2097152 00:05:57.815 malloc 3145728 00:05:57.815 register 0x200000400000 4194304 00:05:57.815 buf 0x200000500000 len 3145728 PASSED 00:05:57.815 malloc 64 00:05:57.815 buf 0x2000004fff40 len 64 PASSED 00:05:57.815 malloc 4194304 00:05:57.815 register 0x200000800000 6291456 00:05:57.815 buf 0x200000a00000 len 4194304 PASSED 00:05:57.815 free 0x200000500000 3145728 00:05:57.815 free 0x2000004fff40 64 00:05:57.815 unregister 0x200000400000 4194304 PASSED 00:05:57.815 free 0x200000a00000 4194304 00:05:57.815 unregister 0x200000800000 6291456 PASSED 00:05:57.815 malloc 8388608 00:05:57.815 register 0x200000400000 10485760 00:05:57.815 buf 0x200000600000 len 8388608 PASSED 00:05:57.815 free 0x200000600000 8388608 00:05:57.815 unregister 0x200000400000 10485760 PASSED 00:05:57.815 passed 00:05:57.815 00:05:57.815 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.815 suites 1 1 n/a 0 0 00:05:57.815 tests 1 1 1 0 0 00:05:57.815 asserts 15 15 15 0 n/a 00:05:57.815 00:05:57.815 Elapsed time = 0.005 seconds 00:05:57.815 00:05:57.815 real 0m0.065s 00:05:57.815 user 0m0.018s 00:05:57.815 sys 0m0.047s 00:05:57.815 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.815 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.815 ************************************ 00:05:57.815 END TEST env_mem_callbacks 00:05:57.815 ************************************ 00:05:57.815 00:05:57.815 real 0m7.155s 00:05:57.815 user 0m4.960s 00:05:57.815 sys 0m1.279s 00:05:57.815 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.815 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.815 ************************************ 00:05:57.815 END TEST env 00:05:57.815 ************************************ 00:05:57.815 05:08:14 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:57.815 05:08:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.815 05:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.815 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.815 ************************************ 00:05:57.815 START TEST rpc 00:05:57.815 ************************************ 00:05:57.815 05:08:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:58.075 * Looking for test storage... 
00:05:58.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:58.075 05:08:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.075 05:08:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.075 05:08:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.075 05:08:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.075 05:08:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.075 05:08:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.075 05:08:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.076 05:08:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.076 05:08:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.076 05:08:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.076 05:08:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.076 05:08:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.076 05:08:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.076 05:08:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.076 05:08:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.076 05:08:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.076 05:08:14 -- scripts/common.sh@344 -- # : 1 00:05:58.076 05:08:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.076 05:08:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.076 05:08:14 -- scripts/common.sh@364 -- # decimal 1 00:05:58.076 05:08:14 -- scripts/common.sh@352 -- # local d=1 00:05:58.076 05:08:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.076 05:08:14 -- scripts/common.sh@354 -- # echo 1 00:05:58.076 05:08:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.076 05:08:14 -- scripts/common.sh@365 -- # decimal 2 00:05:58.076 05:08:14 -- scripts/common.sh@352 -- # local d=2 00:05:58.076 05:08:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.076 05:08:14 -- scripts/common.sh@354 -- # echo 2 00:05:58.076 05:08:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.076 05:08:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.076 05:08:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.076 05:08:14 -- scripts/common.sh@367 -- # return 0 00:05:58.076 05:08:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.076 05:08:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.076 --rc genhtml_branch_coverage=1 00:05:58.076 --rc genhtml_function_coverage=1 00:05:58.076 --rc genhtml_legend=1 00:05:58.076 --rc geninfo_all_blocks=1 00:05:58.076 --rc geninfo_unexecuted_blocks=1 00:05:58.076 00:05:58.076 ' 00:05:58.076 05:08:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.076 --rc genhtml_branch_coverage=1 00:05:58.076 --rc genhtml_function_coverage=1 00:05:58.076 --rc genhtml_legend=1 00:05:58.076 --rc geninfo_all_blocks=1 00:05:58.076 --rc geninfo_unexecuted_blocks=1 00:05:58.076 00:05:58.076 ' 00:05:58.076 05:08:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.076 --rc genhtml_branch_coverage=1 00:05:58.076 --rc genhtml_function_coverage=1 00:05:58.076 --rc genhtml_legend=1 00:05:58.076 --rc geninfo_all_blocks=1 00:05:58.076 --rc geninfo_unexecuted_blocks=1 00:05:58.076 00:05:58.076 ' 
00:05:58.076 05:08:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.076 --rc genhtml_branch_coverage=1 00:05:58.076 --rc genhtml_function_coverage=1 00:05:58.076 --rc genhtml_legend=1 00:05:58.076 --rc geninfo_all_blocks=1 00:05:58.076 --rc geninfo_unexecuted_blocks=1 00:05:58.076 00:05:58.076 ' 00:05:58.076 05:08:14 -- rpc/rpc.sh@65 -- # spdk_pid=1639019 00:05:58.076 05:08:14 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:58.076 05:08:14 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.076 05:08:14 -- rpc/rpc.sh@67 -- # waitforlisten 1639019 00:05:58.076 05:08:14 -- common/autotest_common.sh@829 -- # '[' -z 1639019 ']' 00:05:58.076 05:08:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.076 05:08:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.076 05:08:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.076 05:08:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.076 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.076 [2024-11-19 05:08:14.589037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.076 [2024-11-19 05:08:14.589092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639019 ] 00:05:58.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.336 [2024-11-19 05:08:14.660950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.336 [2024-11-19 05:08:14.697370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.336 [2024-11-19 05:08:14.697482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:58.336 [2024-11-19 05:08:14.697492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1639019' to capture a snapshot of events at runtime. 00:05:58.336 [2024-11-19 05:08:14.697501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1639019 for offline analysis/debug. 
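
The two app_setup_trace notices above give the live and offline paths for decoding the tracepoint group enabled by '-e bdev'. A sketch of both (the pid 1639019 is specific to this run, and reading the copied file with -f is an assumption to confirm against spdk_trace -h):

    # Live snapshot, exactly as the notice suggests:
    build/bin/spdk_trace -s spdk_tgt -p 1639019

    # Offline: preserve the shared-memory ring for later decoding (assumed -f usage):
    cp /dev/shm/spdk_tgt_trace.pid1639019 /tmp/
    build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid1639019
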
00:05:58.336 [2024-11-19 05:08:14.697522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.905 05:08:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.905 05:08:15 -- common/autotest_common.sh@862 -- # return 0 00:05:58.905 05:08:15 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:58.905 05:08:15 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:58.905 05:08:15 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.905 05:08:15 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.905 05:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.905 05:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.905 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:58.905 ************************************ 00:05:58.905 START TEST rpc_integrity 00:05:58.905 ************************************ 00:05:58.905 05:08:15 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:58.905 05:08:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.905 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.905 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:58.905 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.905 05:08:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.905 05:08:15 -- rpc/rpc.sh@13 -- # jq length 00:05:58.905 05:08:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.905 05:08:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.905 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.905 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:58.905 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.905 05:08:15 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.905 05:08:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.905 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.905 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.165 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.165 05:08:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.165 { 00:05:59.165 "name": "Malloc0", 00:05:59.165 "aliases": [ 00:05:59.165 "cb24ec5f-727e-453a-8270-d4a6d075c4fc" 00:05:59.165 ], 00:05:59.165 "product_name": "Malloc disk", 00:05:59.165 "block_size": 512, 00:05:59.165 "num_blocks": 16384, 00:05:59.165 "uuid": "cb24ec5f-727e-453a-8270-d4a6d075c4fc", 00:05:59.165 "assigned_rate_limits": { 00:05:59.165 "rw_ios_per_sec": 0, 00:05:59.165 "rw_mbytes_per_sec": 0, 00:05:59.165 "r_mbytes_per_sec": 0, 00:05:59.165 "w_mbytes_per_sec": 0 00:05:59.165 }, 00:05:59.165 "claimed": false, 00:05:59.165 "zoned": false, 00:05:59.165 "supported_io_types": { 00:05:59.165 "read": true, 00:05:59.165 "write": true, 00:05:59.165 "unmap": true, 00:05:59.165 "write_zeroes": true, 00:05:59.165 "flush": true, 00:05:59.165 "reset": true, 00:05:59.165 "compare": false, 00:05:59.165 "compare_and_write": false, 00:05:59.165 "abort": true, 00:05:59.165 "nvme_admin": 
false, 00:05:59.165 "nvme_io": false 00:05:59.165 }, 00:05:59.165 "memory_domains": [ 00:05:59.165 { 00:05:59.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.165 "dma_device_type": 2 00:05:59.165 } 00:05:59.165 ], 00:05:59.165 "driver_specific": {} 00:05:59.165 } 00:05:59.165 ]' 00:05:59.165 05:08:15 -- rpc/rpc.sh@17 -- # jq length 00:05:59.165 05:08:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:59.165 05:08:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 [2024-11-19 05:08:15.516985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:59.166 [2024-11-19 05:08:15.517019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:59.166 [2024-11-19 05:08:15.517033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb4890 00:05:59.166 [2024-11-19 05:08:15.517042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:59.166 [2024-11-19 05:08:15.518049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:59.166 [2024-11-19 05:08:15.518073] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:59.166 Passthru0 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.166 { 00:05:59.166 "name": "Malloc0", 00:05:59.166 "aliases": [ 00:05:59.166 "cb24ec5f-727e-453a-8270-d4a6d075c4fc" 00:05:59.166 ], 00:05:59.166 "product_name": "Malloc disk", 00:05:59.166 "block_size": 512, 00:05:59.166 "num_blocks": 16384, 00:05:59.166 "uuid": "cb24ec5f-727e-453a-8270-d4a6d075c4fc", 00:05:59.166 "assigned_rate_limits": { 00:05:59.166 "rw_ios_per_sec": 0, 00:05:59.166 "rw_mbytes_per_sec": 0, 00:05:59.166 "r_mbytes_per_sec": 0, 00:05:59.166 "w_mbytes_per_sec": 0 00:05:59.166 }, 00:05:59.166 "claimed": true, 00:05:59.166 "claim_type": "exclusive_write", 00:05:59.166 "zoned": false, 00:05:59.166 "supported_io_types": { 00:05:59.166 "read": true, 00:05:59.166 "write": true, 00:05:59.166 "unmap": true, 00:05:59.166 "write_zeroes": true, 00:05:59.166 "flush": true, 00:05:59.166 "reset": true, 00:05:59.166 "compare": false, 00:05:59.166 "compare_and_write": false, 00:05:59.166 "abort": true, 00:05:59.166 "nvme_admin": false, 00:05:59.166 "nvme_io": false 00:05:59.166 }, 00:05:59.166 "memory_domains": [ 00:05:59.166 { 00:05:59.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.166 "dma_device_type": 2 00:05:59.166 } 00:05:59.166 ], 00:05:59.166 "driver_specific": {} 00:05:59.166 }, 00:05:59.166 { 00:05:59.166 "name": "Passthru0", 00:05:59.166 "aliases": [ 00:05:59.166 "7234f707-c12f-5a78-8170-19db89331898" 00:05:59.166 ], 00:05:59.166 "product_name": "passthru", 00:05:59.166 "block_size": 512, 00:05:59.166 "num_blocks": 16384, 00:05:59.166 "uuid": "7234f707-c12f-5a78-8170-19db89331898", 00:05:59.166 "assigned_rate_limits": { 00:05:59.166 "rw_ios_per_sec": 0, 00:05:59.166 "rw_mbytes_per_sec": 0, 00:05:59.166 "r_mbytes_per_sec": 0, 00:05:59.166 "w_mbytes_per_sec": 0 00:05:59.166 }, 00:05:59.166 "claimed": false, 
00:05:59.166 "zoned": false, 00:05:59.166 "supported_io_types": { 00:05:59.166 "read": true, 00:05:59.166 "write": true, 00:05:59.166 "unmap": true, 00:05:59.166 "write_zeroes": true, 00:05:59.166 "flush": true, 00:05:59.166 "reset": true, 00:05:59.166 "compare": false, 00:05:59.166 "compare_and_write": false, 00:05:59.166 "abort": true, 00:05:59.166 "nvme_admin": false, 00:05:59.166 "nvme_io": false 00:05:59.166 }, 00:05:59.166 "memory_domains": [ 00:05:59.166 { 00:05:59.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.166 "dma_device_type": 2 00:05:59.166 } 00:05:59.166 ], 00:05:59.166 "driver_specific": { 00:05:59.166 "passthru": { 00:05:59.166 "name": "Passthru0", 00:05:59.166 "base_bdev_name": "Malloc0" 00:05:59.166 } 00:05:59.166 } 00:05:59.166 } 00:05:59.166 ]' 00:05:59.166 05:08:15 -- rpc/rpc.sh@21 -- # jq length 00:05:59.166 05:08:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.166 05:08:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.166 05:08:15 -- rpc/rpc.sh@26 -- # jq length 00:05:59.166 05:08:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.166 00:05:59.166 real 0m0.241s 00:05:59.166 user 0m0.149s 00:05:59.166 sys 0m0.043s 00:05:59.166 05:08:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 ************************************ 00:05:59.166 END TEST rpc_integrity 00:05:59.166 ************************************ 00:05:59.166 05:08:15 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:59.166 05:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.166 05:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 ************************************ 00:05:59.166 START TEST rpc_plugins 00:05:59.166 ************************************ 00:05:59.166 05:08:15 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:59.166 05:08:15 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:59.166 05:08:15 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:59.166 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.166 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.166 05:08:15 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:59.166 { 00:05:59.166 "name": "Malloc1", 
00:05:59.166 "aliases": [ 00:05:59.166 "7f45a54e-6c27-46e1-a99d-b68efd8368c9" 00:05:59.166 ], 00:05:59.166 "product_name": "Malloc disk", 00:05:59.166 "block_size": 4096, 00:05:59.166 "num_blocks": 256, 00:05:59.166 "uuid": "7f45a54e-6c27-46e1-a99d-b68efd8368c9", 00:05:59.166 "assigned_rate_limits": { 00:05:59.166 "rw_ios_per_sec": 0, 00:05:59.166 "rw_mbytes_per_sec": 0, 00:05:59.166 "r_mbytes_per_sec": 0, 00:05:59.166 "w_mbytes_per_sec": 0 00:05:59.166 }, 00:05:59.166 "claimed": false, 00:05:59.166 "zoned": false, 00:05:59.166 "supported_io_types": { 00:05:59.166 "read": true, 00:05:59.166 "write": true, 00:05:59.166 "unmap": true, 00:05:59.166 "write_zeroes": true, 00:05:59.166 "flush": true, 00:05:59.166 "reset": true, 00:05:59.166 "compare": false, 00:05:59.166 "compare_and_write": false, 00:05:59.166 "abort": true, 00:05:59.166 "nvme_admin": false, 00:05:59.166 "nvme_io": false 00:05:59.166 }, 00:05:59.166 "memory_domains": [ 00:05:59.166 { 00:05:59.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.166 "dma_device_type": 2 00:05:59.166 } 00:05:59.166 ], 00:05:59.166 "driver_specific": {} 00:05:59.166 } 00:05:59.166 ]' 00:05:59.166 05:08:15 -- rpc/rpc.sh@32 -- # jq length 00:05:59.426 05:08:15 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:59.426 05:08:15 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:59.426 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.426 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.426 05:08:15 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:59.427 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.427 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.427 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.427 05:08:15 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:59.427 05:08:15 -- rpc/rpc.sh@36 -- # jq length 00:05:59.427 05:08:15 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:59.427 00:05:59.427 real 0m0.134s 00:05:59.427 user 0m0.084s 00:05:59.427 sys 0m0.023s 00:05:59.427 05:08:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.427 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.427 ************************************ 00:05:59.427 END TEST rpc_plugins 00:05:59.427 ************************************ 00:05:59.427 05:08:15 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:59.427 05:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.427 05:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.427 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.427 ************************************ 00:05:59.427 START TEST rpc_trace_cmd_test 00:05:59.427 ************************************ 00:05:59.427 05:08:15 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:59.427 05:08:15 -- rpc/rpc.sh@40 -- # local info 00:05:59.427 05:08:15 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:59.427 05:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.427 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.427 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.427 05:08:15 -- rpc/rpc.sh@42 -- # info='{ 00:05:59.427 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1639019", 00:05:59.427 "tpoint_group_mask": "0x8", 00:05:59.427 "iscsi_conn": { 00:05:59.427 "mask": "0x2", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "scsi": { 
00:05:59.427 "mask": "0x4", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "bdev": { 00:05:59.427 "mask": "0x8", 00:05:59.427 "tpoint_mask": "0xffffffffffffffff" 00:05:59.427 }, 00:05:59.427 "nvmf_rdma": { 00:05:59.427 "mask": "0x10", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "nvmf_tcp": { 00:05:59.427 "mask": "0x20", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "ftl": { 00:05:59.427 "mask": "0x40", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "blobfs": { 00:05:59.427 "mask": "0x80", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "dsa": { 00:05:59.427 "mask": "0x200", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "thread": { 00:05:59.427 "mask": "0x400", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "nvme_pcie": { 00:05:59.427 "mask": "0x800", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "iaa": { 00:05:59.427 "mask": "0x1000", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "nvme_tcp": { 00:05:59.427 "mask": "0x2000", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 }, 00:05:59.427 "bdev_nvme": { 00:05:59.427 "mask": "0x4000", 00:05:59.427 "tpoint_mask": "0x0" 00:05:59.427 } 00:05:59.427 }' 00:05:59.427 05:08:15 -- rpc/rpc.sh@43 -- # jq length 00:05:59.427 05:08:15 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:59.427 05:08:15 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:59.427 05:08:15 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:59.427 05:08:15 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:59.686 05:08:16 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:59.686 05:08:16 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:59.686 05:08:16 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:59.686 05:08:16 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:59.686 05:08:16 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:59.686 00:05:59.686 real 0m0.230s 00:05:59.686 user 0m0.187s 00:05:59.686 sys 0m0.036s 00:05:59.686 05:08:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.686 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.686 ************************************ 00:05:59.686 END TEST rpc_trace_cmd_test 00:05:59.686 ************************************ 00:05:59.686 05:08:16 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:59.686 05:08:16 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:59.686 05:08:16 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:59.686 05:08:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.686 05:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.686 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 ************************************ 00:05:59.687 START TEST rpc_daemon_integrity 00:05:59.687 ************************************ 00:05:59.687 05:08:16 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:59.687 05:08:16 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.687 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.687 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.687 05:08:16 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:59.687 05:08:16 -- rpc/rpc.sh@13 -- # jq length 00:05:59.687 05:08:16 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:59.687 05:08:16 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.687 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.687 
05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.687 05:08:16 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:59.687 05:08:16 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:59.687 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.687 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.687 05:08:16 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.687 { 00:05:59.687 "name": "Malloc2", 00:05:59.687 "aliases": [ 00:05:59.687 "96839c93-1d0b-4a41-9e11-91350b616425" 00:05:59.687 ], 00:05:59.687 "product_name": "Malloc disk", 00:05:59.687 "block_size": 512, 00:05:59.687 "num_blocks": 16384, 00:05:59.687 "uuid": "96839c93-1d0b-4a41-9e11-91350b616425", 00:05:59.687 "assigned_rate_limits": { 00:05:59.687 "rw_ios_per_sec": 0, 00:05:59.687 "rw_mbytes_per_sec": 0, 00:05:59.687 "r_mbytes_per_sec": 0, 00:05:59.687 "w_mbytes_per_sec": 0 00:05:59.687 }, 00:05:59.687 "claimed": false, 00:05:59.687 "zoned": false, 00:05:59.687 "supported_io_types": { 00:05:59.687 "read": true, 00:05:59.687 "write": true, 00:05:59.687 "unmap": true, 00:05:59.687 "write_zeroes": true, 00:05:59.687 "flush": true, 00:05:59.687 "reset": true, 00:05:59.687 "compare": false, 00:05:59.687 "compare_and_write": false, 00:05:59.687 "abort": true, 00:05:59.687 "nvme_admin": false, 00:05:59.687 "nvme_io": false 00:05:59.687 }, 00:05:59.687 "memory_domains": [ 00:05:59.687 { 00:05:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.687 "dma_device_type": 2 00:05:59.687 } 00:05:59.687 ], 00:05:59.687 "driver_specific": {} 00:05:59.687 } 00:05:59.687 ]' 00:05:59.687 05:08:16 -- rpc/rpc.sh@17 -- # jq length 00:05:59.946 05:08:16 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:59.946 05:08:16 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:59.946 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.946 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.946 [2024-11-19 05:08:16.275053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:59.946 [2024-11-19 05:08:16.275084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:59.947 [2024-11-19 05:08:16.275097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb5ec0 00:05:59.947 [2024-11-19 05:08:16.275106] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:59.947 [2024-11-19 05:08:16.275993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:59.947 [2024-11-19 05:08:16.276016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:59.947 Passthru0 00:05:59.947 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.947 05:08:16 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:59.947 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.947 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.947 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.947 05:08:16 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.947 { 00:05:59.947 "name": "Malloc2", 00:05:59.947 "aliases": [ 00:05:59.947 "96839c93-1d0b-4a41-9e11-91350b616425" 00:05:59.947 ], 00:05:59.947 "product_name": "Malloc disk", 00:05:59.947 "block_size": 512, 00:05:59.947 "num_blocks": 16384, 00:05:59.947 "uuid": "96839c93-1d0b-4a41-9e11-91350b616425", 00:05:59.947 
"assigned_rate_limits": { 00:05:59.947 "rw_ios_per_sec": 0, 00:05:59.947 "rw_mbytes_per_sec": 0, 00:05:59.947 "r_mbytes_per_sec": 0, 00:05:59.947 "w_mbytes_per_sec": 0 00:05:59.947 }, 00:05:59.947 "claimed": true, 00:05:59.947 "claim_type": "exclusive_write", 00:05:59.947 "zoned": false, 00:05:59.947 "supported_io_types": { 00:05:59.947 "read": true, 00:05:59.947 "write": true, 00:05:59.947 "unmap": true, 00:05:59.947 "write_zeroes": true, 00:05:59.947 "flush": true, 00:05:59.947 "reset": true, 00:05:59.947 "compare": false, 00:05:59.947 "compare_and_write": false, 00:05:59.947 "abort": true, 00:05:59.947 "nvme_admin": false, 00:05:59.947 "nvme_io": false 00:05:59.947 }, 00:05:59.947 "memory_domains": [ 00:05:59.947 { 00:05:59.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.947 "dma_device_type": 2 00:05:59.947 } 00:05:59.947 ], 00:05:59.947 "driver_specific": {} 00:05:59.947 }, 00:05:59.947 { 00:05:59.947 "name": "Passthru0", 00:05:59.947 "aliases": [ 00:05:59.947 "45a50292-70f0-573e-aa2a-9b42d3c404ee" 00:05:59.947 ], 00:05:59.947 "product_name": "passthru", 00:05:59.947 "block_size": 512, 00:05:59.947 "num_blocks": 16384, 00:05:59.947 "uuid": "45a50292-70f0-573e-aa2a-9b42d3c404ee", 00:05:59.947 "assigned_rate_limits": { 00:05:59.947 "rw_ios_per_sec": 0, 00:05:59.947 "rw_mbytes_per_sec": 0, 00:05:59.947 "r_mbytes_per_sec": 0, 00:05:59.947 "w_mbytes_per_sec": 0 00:05:59.947 }, 00:05:59.947 "claimed": false, 00:05:59.947 "zoned": false, 00:05:59.947 "supported_io_types": { 00:05:59.947 "read": true, 00:05:59.947 "write": true, 00:05:59.947 "unmap": true, 00:05:59.947 "write_zeroes": true, 00:05:59.947 "flush": true, 00:05:59.947 "reset": true, 00:05:59.947 "compare": false, 00:05:59.947 "compare_and_write": false, 00:05:59.947 "abort": true, 00:05:59.947 "nvme_admin": false, 00:05:59.947 "nvme_io": false 00:05:59.947 }, 00:05:59.947 "memory_domains": [ 00:05:59.947 { 00:05:59.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.947 "dma_device_type": 2 00:05:59.947 } 00:05:59.947 ], 00:05:59.947 "driver_specific": { 00:05:59.947 "passthru": { 00:05:59.947 "name": "Passthru0", 00:05:59.947 "base_bdev_name": "Malloc2" 00:05:59.947 } 00:05:59.947 } 00:05:59.947 } 00:05:59.947 ]' 00:05:59.947 05:08:16 -- rpc/rpc.sh@21 -- # jq length 00:05:59.947 05:08:16 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.947 05:08:16 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.947 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.947 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.947 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.947 05:08:16 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:59.947 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.947 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.947 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.947 05:08:16 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.947 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.947 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.947 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.947 05:08:16 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.947 05:08:16 -- rpc/rpc.sh@26 -- # jq length 00:05:59.947 05:08:16 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.947 00:05:59.947 real 0m0.259s 00:05:59.947 user 0m0.164s 00:05:59.947 sys 0m0.039s 00:05:59.947 05:08:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:05:59.947 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:59.947 ************************************ 00:05:59.947 END TEST rpc_daemon_integrity 00:05:59.947 ************************************ 00:05:59.947 05:08:16 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:59.947 05:08:16 -- rpc/rpc.sh@84 -- # killprocess 1639019 00:05:59.947 05:08:16 -- common/autotest_common.sh@936 -- # '[' -z 1639019 ']' 00:05:59.947 05:08:16 -- common/autotest_common.sh@940 -- # kill -0 1639019 00:05:59.947 05:08:16 -- common/autotest_common.sh@941 -- # uname 00:05:59.947 05:08:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.947 05:08:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1639019 00:05:59.947 05:08:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.947 05:08:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.947 05:08:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1639019' 00:05:59.947 killing process with pid 1639019 00:05:59.947 05:08:16 -- common/autotest_common.sh@955 -- # kill 1639019 00:05:59.947 05:08:16 -- common/autotest_common.sh@960 -- # wait 1639019 00:06:00.515 00:06:00.515 real 0m2.445s 00:06:00.515 user 0m3.029s 00:06:00.515 sys 0m0.755s 00:06:00.515 05:08:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.515 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.515 ************************************ 00:06:00.515 END TEST rpc 00:06:00.515 ************************************ 00:06:00.515 05:08:16 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.515 05:08:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.515 05:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.515 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.515 ************************************ 00:06:00.515 START TEST rpc_client 00:06:00.515 ************************************ 00:06:00.515 05:08:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.515 * Looking for test storage... 
00:06:00.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:00.515 05:08:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.515 05:08:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.515 05:08:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.515 05:08:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.515 05:08:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.515 05:08:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.515 05:08:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.515 05:08:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.515 05:08:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.515 05:08:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.515 05:08:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.515 05:08:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.515 05:08:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.515 05:08:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.515 05:08:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.515 05:08:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.515 05:08:17 -- scripts/common.sh@344 -- # : 1 00:06:00.515 05:08:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.515 05:08:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.515 05:08:17 -- scripts/common.sh@364 -- # decimal 1 00:06:00.515 05:08:17 -- scripts/common.sh@352 -- # local d=1 00:06:00.515 05:08:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.515 05:08:17 -- scripts/common.sh@354 -- # echo 1 00:06:00.515 05:08:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.515 05:08:17 -- scripts/common.sh@365 -- # decimal 2 00:06:00.515 05:08:17 -- scripts/common.sh@352 -- # local d=2 00:06:00.515 05:08:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.515 05:08:17 -- scripts/common.sh@354 -- # echo 2 00:06:00.515 05:08:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.515 05:08:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.515 05:08:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.515 05:08:17 -- scripts/common.sh@367 -- # return 0 00:06:00.515 05:08:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.515 05:08:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.515 --rc genhtml_branch_coverage=1 00:06:00.515 --rc genhtml_function_coverage=1 00:06:00.515 --rc genhtml_legend=1 00:06:00.515 --rc geninfo_all_blocks=1 00:06:00.515 --rc geninfo_unexecuted_blocks=1 00:06:00.515 00:06:00.515 ' 00:06:00.515 05:08:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.515 --rc genhtml_branch_coverage=1 00:06:00.515 --rc genhtml_function_coverage=1 00:06:00.515 --rc genhtml_legend=1 00:06:00.515 --rc geninfo_all_blocks=1 00:06:00.515 --rc geninfo_unexecuted_blocks=1 00:06:00.515 00:06:00.515 ' 00:06:00.515 05:08:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.515 --rc genhtml_branch_coverage=1 00:06:00.515 --rc genhtml_function_coverage=1 00:06:00.515 --rc genhtml_legend=1 00:06:00.515 --rc geninfo_all_blocks=1 00:06:00.515 --rc geninfo_unexecuted_blocks=1 00:06:00.515 00:06:00.515 ' 
00:06:00.515 05:08:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.515 --rc genhtml_branch_coverage=1 00:06:00.515 --rc genhtml_function_coverage=1 00:06:00.515 --rc genhtml_legend=1 00:06:00.515 --rc geninfo_all_blocks=1 00:06:00.515 --rc geninfo_unexecuted_blocks=1 00:06:00.515 00:06:00.515 ' 00:06:00.515 05:08:17 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:00.515 OK 00:06:00.515 05:08:17 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.515 00:06:00.515 real 0m0.210s 00:06:00.515 user 0m0.123s 00:06:00.515 sys 0m0.100s 00:06:00.515 05:08:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.515 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.515 ************************************ 00:06:00.515 END TEST rpc_client 00:06:00.515 ************************************ 00:06:00.775 05:08:17 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.775 05:08:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.775 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.775 ************************************ 00:06:00.775 START TEST json_config 00:06:00.775 ************************************ 00:06:00.775 05:08:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.775 05:08:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.775 05:08:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.775 05:08:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.775 05:08:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.775 05:08:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.775 05:08:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.775 05:08:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.775 05:08:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.775 05:08:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.775 05:08:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.775 05:08:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.775 05:08:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.775 05:08:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.775 05:08:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.775 05:08:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.775 05:08:17 -- scripts/common.sh@344 -- # : 1 00:06:00.775 05:08:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.775 05:08:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.775 05:08:17 -- scripts/common.sh@364 -- # decimal 1 00:06:00.775 05:08:17 -- scripts/common.sh@352 -- # local d=1 00:06:00.775 05:08:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.775 05:08:17 -- scripts/common.sh@354 -- # echo 1 00:06:00.775 05:08:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.775 05:08:17 -- scripts/common.sh@365 -- # decimal 2 00:06:00.775 05:08:17 -- scripts/common.sh@352 -- # local d=2 00:06:00.775 05:08:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.775 05:08:17 -- scripts/common.sh@354 -- # echo 2 00:06:00.775 05:08:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.775 05:08:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.775 05:08:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.775 05:08:17 -- scripts/common.sh@367 -- # return 0 00:06:00.775 05:08:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.775 --rc genhtml_branch_coverage=1 00:06:00.775 --rc genhtml_function_coverage=1 00:06:00.775 --rc genhtml_legend=1 00:06:00.775 --rc geninfo_all_blocks=1 00:06:00.775 --rc geninfo_unexecuted_blocks=1 00:06:00.775 00:06:00.775 ' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.775 --rc genhtml_branch_coverage=1 00:06:00.775 --rc genhtml_function_coverage=1 00:06:00.775 --rc genhtml_legend=1 00:06:00.775 --rc geninfo_all_blocks=1 00:06:00.775 --rc geninfo_unexecuted_blocks=1 00:06:00.775 00:06:00.775 ' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.775 --rc genhtml_branch_coverage=1 00:06:00.775 --rc genhtml_function_coverage=1 00:06:00.775 --rc genhtml_legend=1 00:06:00.775 --rc geninfo_all_blocks=1 00:06:00.775 --rc geninfo_unexecuted_blocks=1 00:06:00.775 00:06:00.775 ' 00:06:00.775 05:08:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.775 --rc genhtml_branch_coverage=1 00:06:00.775 --rc genhtml_function_coverage=1 00:06:00.775 --rc genhtml_legend=1 00:06:00.775 --rc geninfo_all_blocks=1 00:06:00.775 --rc geninfo_unexecuted_blocks=1 00:06:00.775 00:06:00.775 ' 00:06:00.776 05:08:17 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.776 05:08:17 -- nvmf/common.sh@7 -- # uname -s 00:06:00.776 05:08:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.776 05:08:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.776 05:08:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.776 05:08:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.776 05:08:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.776 05:08:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.776 05:08:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.776 05:08:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.776 05:08:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.776 05:08:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.776 05:08:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:00.776 05:08:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:00.776 05:08:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.776 05:08:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.776 05:08:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.776 05:08:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:00.776 05:08:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.776 05:08:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.776 05:08:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.776 05:08:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.776 05:08:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.776 05:08:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.776 05:08:17 -- paths/export.sh@5 -- # export PATH 00:06:00.776 05:08:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.776 05:08:17 -- nvmf/common.sh@46 -- # : 0 00:06:00.776 05:08:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:00.776 05:08:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:00.776 05:08:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:00.776 05:08:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.776 05:08:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.776 05:08:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:00.776 05:08:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:00.776 05:08:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:00.776 05:08:17 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:00.776 05:08:17 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.776 05:08:17 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.776 05:08:17 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:00.776 05:08:17 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.776 05:08:17 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:00.776 05:08:17 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.776 05:08:17 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:00.776 05:08:17 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:00.776 05:08:17 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:00.776 05:08:17 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:00.776 05:08:17 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.776 05:08:17 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:00.776 INFO: JSON configuration test init 00:06:00.776 05:08:17 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:00.776 05:08:17 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:00.776 05:08:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.776 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 05:08:17 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:00.776 05:08:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.776 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 05:08:17 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:00.776 05:08:17 -- json_config/json_config.sh@98 -- # local app=target 00:06:00.776 05:08:17 -- json_config/json_config.sh@99 -- # shift 00:06:00.776 05:08:17 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:00.776 05:08:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:00.776 05:08:17 -- json_config/json_config.sh@111 -- # app_pid[$app]=1639645 00:06:00.776 05:08:17 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:00.776 Waiting for target to run... 
00:06:00.776 05:08:17 -- json_config/json_config.sh@114 -- # waitforlisten 1639645 /var/tmp/spdk_tgt.sock 00:06:00.776 05:08:17 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:00.776 05:08:17 -- common/autotest_common.sh@829 -- # '[' -z 1639645 ']' 00:06:00.776 05:08:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.776 05:08:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.776 05:08:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.776 05:08:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.776 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.035 [2024-11-19 05:08:17.358843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.035 [2024-11-19 05:08:17.358902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639645 ] 00:06:01.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.294 [2024-11-19 05:08:17.797434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.294 [2024-11-19 05:08:17.824686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.294 [2024-11-19 05:08:17.824807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.863 05:08:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.863 05:08:18 -- common/autotest_common.sh@862 -- # return 0 00:06:01.863 05:08:18 -- json_config/json_config.sh@115 -- # echo '' 00:06:01.863 00:06:01.863 05:08:18 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:01.863 05:08:18 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:01.863 05:08:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.863 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:01.863 05:08:18 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:01.863 05:08:18 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:01.863 05:08:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.863 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:01.863 05:08:18 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.863 05:08:18 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:01.863 05:08:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:05.154 05:08:21 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:05.154 05:08:21 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:05.154 05:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.154 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 05:08:21 -- json_config/json_config.sh@48 -- # local ret=0 00:06:05.154 05:08:21 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:05.154 05:08:21 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:06:05.154 05:08:21 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:05.154 05:08:21 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:05.154 05:08:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:05.154 05:08:21 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:05.154 05:08:21 -- json_config/json_config.sh@51 -- # local get_types 00:06:05.154 05:08:21 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:05.154 05:08:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.154 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 05:08:21 -- json_config/json_config.sh@58 -- # return 0 00:06:05.154 05:08:21 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:05.154 05:08:21 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:05.154 05:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.154 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 05:08:21 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.154 05:08:21 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:06:05.154 05:08:21 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:06:05.154 05:08:21 -- json_config/json_config.sh@287 -- # nvmftestinit 00:06:05.154 05:08:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:06:05.154 05:08:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.154 05:08:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:05.154 05:08:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:05.154 05:08:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:05.154 05:08:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.154 05:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:05.154 05:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.154 05:08:21 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:06:05.154 05:08:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:05.154 05:08:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:05.154 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:11.817 05:08:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:11.817 05:08:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:11.817 05:08:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:11.817 05:08:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:11.817 05:08:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:11.817 05:08:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:11.817 05:08:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:11.817 05:08:27 -- nvmf/common.sh@294 -- # net_devs=() 00:06:11.817 05:08:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:11.817 05:08:27 -- nvmf/common.sh@295 -- # 
e810=() 00:06:11.817 05:08:27 -- nvmf/common.sh@295 -- # local -ga e810 00:06:11.817 05:08:27 -- nvmf/common.sh@296 -- # x722=() 00:06:11.817 05:08:27 -- nvmf/common.sh@296 -- # local -ga x722 00:06:11.817 05:08:27 -- nvmf/common.sh@297 -- # mlx=() 00:06:11.817 05:08:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:11.817 05:08:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.817 05:08:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:11.817 05:08:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:11.817 05:08:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:11.817 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:11.817 05:08:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:11.817 05:08:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:11.817 05:08:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:11.817 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:11.817 05:08:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:11.817 05:08:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:11.817 05:08:27 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:11.817 05:08:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.817 05:08:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:06:11.817 05:08:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.817 05:08:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:11.817 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:11.817 05:08:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:11.817 05:08:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.817 05:08:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:11.817 05:08:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.817 05:08:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:11.817 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:11.817 05:08:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.817 05:08:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:11.817 05:08:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:11.817 05:08:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:06:11.817 05:08:27 -- nvmf/common.sh@408 -- # rdma_device_init 00:06:11.817 05:08:27 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:06:11.817 05:08:27 -- nvmf/common.sh@57 -- # uname 00:06:11.817 05:08:27 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:06:11.817 05:08:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:06:11.817 05:08:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:06:11.817 05:08:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:06:11.817 05:08:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:06:11.817 05:08:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:06:11.817 05:08:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:06:11.817 05:08:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:06:11.817 05:08:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:06:11.817 05:08:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:11.817 05:08:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:06:11.817 05:08:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:11.817 05:08:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:11.817 05:08:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:11.817 05:08:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:11.817 05:08:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:11.817 05:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:11.817 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.817 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:11.817 05:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:11.817 05:08:28 -- nvmf/common.sh@104 -- # continue 2 00:06:11.817 05:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:11.817 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.817 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:11.817 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.817 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:11.817 05:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:11.817 05:08:28 -- nvmf/common.sh@104 -- # continue 2 00:06:11.817 05:08:28 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:06:11.818 05:08:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:11.818 05:08:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:06:11.818 05:08:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:06:11.818 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:11.818 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:11.818 altname enp217s0f0np0 00:06:11.818 altname ens818f0np0 00:06:11.818 inet 192.168.100.8/24 scope global mlx_0_0 00:06:11.818 valid_lft forever preferred_lft forever 00:06:11.818 05:08:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:06:11.818 05:08:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:11.818 05:08:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:06:11.818 05:08:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:06:11.818 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:11.818 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:11.818 altname enp217s0f1np1 00:06:11.818 altname ens818f1np1 00:06:11.818 inet 192.168.100.9/24 scope global mlx_0_1 00:06:11.818 valid_lft forever preferred_lft forever 00:06:11.818 05:08:28 -- nvmf/common.sh@410 -- # return 0 00:06:11.818 05:08:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:11.818 05:08:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:11.818 05:08:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:06:11.818 05:08:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:06:11.818 05:08:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:11.818 05:08:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:11.818 05:08:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:11.818 05:08:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:11.818 05:08:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:11.818 05:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:11.818 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.818 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@104 -- # continue 2 00:06:11.818 05:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:11.818 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.818 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:11.818 05:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:11.818 05:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:11.818 05:08:28 -- 
nvmf/common.sh@104 -- # continue 2 00:06:11.818 05:08:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:11.818 05:08:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:11.818 05:08:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:11.818 05:08:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:11.818 05:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:11.818 05:08:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:06:11.818 192.168.100.9' 00:06:11.818 05:08:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:06:11.818 192.168.100.9' 00:06:11.818 05:08:28 -- nvmf/common.sh@445 -- # head -n 1 00:06:11.818 05:08:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:11.818 05:08:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:11.818 192.168.100.9' 00:06:11.818 05:08:28 -- nvmf/common.sh@446 -- # tail -n +2 00:06:11.818 05:08:28 -- nvmf/common.sh@446 -- # head -n 1 00:06:11.818 05:08:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:11.818 05:08:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:06:11.818 05:08:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:11.818 05:08:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:06:11.818 05:08:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:06:11.818 05:08:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:06:11.818 05:08:28 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:06:11.818 05:08:28 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:11.818 05:08:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.078 MallocForNvmf0 00:06:12.078 05:08:28 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:12.078 05:08:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:12.078 MallocForNvmf1 00:06:12.078 05:08:28 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:12.078 05:08:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:12.337 [2024-11-19 05:08:28.796677] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:12.337 [2024-11-19 05:08:28.825448] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2320780/0x232db60) succeed. 00:06:12.337 [2024-11-19 05:08:28.837203] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2322970/0x236f200) succeed. 
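At this point the malloc bdevs and the RDMA transport are in place; the entries that follow add the subsystem, its namespaces, and the RDMA listener. Condensed into standalone rpc.py calls, the bring-up the test drives looks roughly like the sketch below — the `rpc` shorthand variable is illustrative glue, but every command and argument is taken from the trace:

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
  $rpc nvmf_create_transport -t rdma -u 8192 -c 0         # -c 0 is raised to the 256 B minimum (warning above)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420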
00:06:12.337 05:08:28 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.337 05:08:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.597 05:08:29 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.597 05:08:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.862 05:08:29 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.862 05:08:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.862 05:08:29 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:12.862 05:08:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:13.122 [2024-11-19 05:08:29.557242] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:13.122 05:08:29 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:13.122 05:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.122 05:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.122 05:08:29 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:13.122 05:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.122 05:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.122 05:08:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:13.122 05:08:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.122 05:08:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.381 MallocBdevForConfigChangeCheck 00:06:13.381 05:08:29 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:13.381 05:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.381 05:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 05:08:29 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:13.381 05:08:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.641 05:08:30 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:13.641 INFO: shutting down applications... 
00:06:13.641 05:08:30 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:13.641 05:08:30 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:13.641 05:08:30 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:13.641 05:08:30 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:16.180 Calling clear_iscsi_subsystem 00:06:16.180 Calling clear_nvmf_subsystem 00:06:16.180 Calling clear_nbd_subsystem 00:06:16.180 Calling clear_ublk_subsystem 00:06:16.180 Calling clear_vhost_blk_subsystem 00:06:16.180 Calling clear_vhost_scsi_subsystem 00:06:16.180 Calling clear_scheduler_subsystem 00:06:16.180 Calling clear_bdev_subsystem 00:06:16.180 Calling clear_accel_subsystem 00:06:16.180 Calling clear_vmd_subsystem 00:06:16.180 Calling clear_sock_subsystem 00:06:16.180 Calling clear_iobuf_subsystem 00:06:16.440 05:08:32 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:16.440 05:08:32 -- json_config/json_config.sh@396 -- # count=100 00:06:16.440 05:08:32 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:16.440 05:08:32 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:16.440 05:08:32 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.440 05:08:32 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:16.700 05:08:33 -- json_config/json_config.sh@398 -- # break 00:06:16.700 05:08:33 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:16.700 05:08:33 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:16.700 05:08:33 -- json_config/json_config.sh@120 -- # local app=target 00:06:16.700 05:08:33 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:16.700 05:08:33 -- json_config/json_config.sh@124 -- # [[ -n 1639645 ]] 00:06:16.700 05:08:33 -- json_config/json_config.sh@127 -- # kill -SIGINT 1639645 00:06:16.700 05:08:33 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:16.700 05:08:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:16.700 05:08:33 -- json_config/json_config.sh@130 -- # kill -0 1639645 00:06:16.700 05:08:33 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:17.268 05:08:33 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:17.268 05:08:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:17.268 05:08:33 -- json_config/json_config.sh@130 -- # kill -0 1639645 00:06:17.268 05:08:33 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:17.268 05:08:33 -- json_config/json_config.sh@132 -- # break 00:06:17.268 05:08:33 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:17.268 05:08:33 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:17.268 SPDK target shutdown done 00:06:17.268 05:08:33 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:17.268 INFO: relaunching applications... 
00:06:17.268 05:08:33 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.268 05:08:33 -- json_config/json_config.sh@98 -- # local app=target 00:06:17.268 05:08:33 -- json_config/json_config.sh@99 -- # shift 00:06:17.268 05:08:33 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:17.268 05:08:33 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:17.268 05:08:33 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:17.268 05:08:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:17.268 05:08:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:17.268 05:08:33 -- json_config/json_config.sh@111 -- # app_pid[$app]=1644722 00:06:17.268 05:08:33 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:17.268 Waiting for target to run... 00:06:17.268 05:08:33 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.268 05:08:33 -- json_config/json_config.sh@114 -- # waitforlisten 1644722 /var/tmp/spdk_tgt.sock 00:06:17.268 05:08:33 -- common/autotest_common.sh@829 -- # '[' -z 1644722 ']' 00:06:17.268 05:08:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.268 05:08:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.268 05:08:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.268 05:08:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.268 05:08:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.268 [2024-11-19 05:08:33.613884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.268 [2024-11-19 05:08:33.613945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644722 ] 00:06:17.268 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.528 [2024-11-19 05:08:34.059039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.528 [2024-11-19 05:08:34.087142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.528 [2024-11-19 05:08:34.087247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.818 [2024-11-19 05:08:37.110842] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1692320/0x169ef00) succeed. 00:06:20.818 [2024-11-19 05:08:37.122652] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1694510/0x171ef40) succeed. 
00:06:20.818 [2024-11-19 05:08:37.172423] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:21.387 05:08:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.387 05:08:37 -- common/autotest_common.sh@862 -- # return 0 00:06:21.387 05:08:37 -- json_config/json_config.sh@115 -- # echo '' 00:06:21.387 00:06:21.387 05:08:37 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:21.387 05:08:37 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.387 INFO: Checking if target configuration is the same... 00:06:21.387 05:08:37 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:21.387 05:08:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.387 05:08:37 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.387 + '[' 2 -ne 2 ']' 00:06:21.387 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:21.387 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:21.387 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:21.387 +++ basename /dev/fd/62 00:06:21.387 ++ mktemp /tmp/62.XXX 00:06:21.387 + tmp_file_1=/tmp/62.mkW 00:06:21.387 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.387 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.387 + tmp_file_2=/tmp/spdk_tgt_config.json.SSR 00:06:21.387 + ret=0 00:06:21.387 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.646 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.646 + diff -u /tmp/62.mkW /tmp/spdk_tgt_config.json.SSR 00:06:21.646 + echo 'INFO: JSON config files are the same' 00:06:21.646 INFO: JSON config files are the same 00:06:21.646 + rm /tmp/62.mkW /tmp/spdk_tgt_config.json.SSR 00:06:21.646 + exit 0 00:06:21.646 05:08:38 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:21.646 05:08:38 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:21.646 INFO: changing configuration and checking if this can be detected... 00:06:21.646 05:08:38 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.646 05:08:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.906 05:08:38 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:21.906 05:08:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.906 05:08:38 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.906 + '[' 2 -ne 2 ']' 00:06:21.906 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:21.906 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
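[editor's note] The "JSON config files are the same" verdict above comes from a normalize-then-diff step: json_diff.sh receives the live config (save_config over RPC, delivered via /dev/fd/62) and the saved spdk_tgt_config.json, pipes both through config_filter.py -method sort, then compares with diff -u. The second run, whose trace continues below, repeats the check after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, so the diff is expected to fail. A condensed sketch of the check, not the script's verbatim source:

    sock=/var/tmp/spdk_tgt.sock
    live=$(mktemp) saved=$(mktemp)
    ./scripts/rpc.py -s "$sock" save_config \
        | ./test/json_config/config_filter.py -method sort > "$live"
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > "$saved"
    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"

Sorting both sides first makes the comparison insensitive to key and array ordering in the dumped JSON.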
00:06:21.906 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:21.906 +++ basename /dev/fd/62 00:06:21.906 ++ mktemp /tmp/62.XXX 00:06:21.906 + tmp_file_1=/tmp/62.Cub 00:06:21.906 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.906 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.906 + tmp_file_2=/tmp/spdk_tgt_config.json.Hrf 00:06:21.906 + ret=0 00:06:21.906 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.165 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.165 + diff -u /tmp/62.Cub /tmp/spdk_tgt_config.json.Hrf 00:06:22.165 + ret=1 00:06:22.165 + echo '=== Start of file: /tmp/62.Cub ===' 00:06:22.165 + cat /tmp/62.Cub 00:06:22.165 + echo '=== End of file: /tmp/62.Cub ===' 00:06:22.165 + echo '' 00:06:22.165 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Hrf ===' 00:06:22.165 + cat /tmp/spdk_tgt_config.json.Hrf 00:06:22.165 + echo '=== End of file: /tmp/spdk_tgt_config.json.Hrf ===' 00:06:22.165 + echo '' 00:06:22.165 + rm /tmp/62.Cub /tmp/spdk_tgt_config.json.Hrf 00:06:22.165 + exit 1 00:06:22.165 05:08:38 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:22.165 INFO: configuration change detected. 00:06:22.165 05:08:38 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:22.165 05:08:38 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:22.165 05:08:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.165 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.165 05:08:38 -- json_config/json_config.sh@360 -- # local ret=0 00:06:22.165 05:08:38 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:22.165 05:08:38 -- json_config/json_config.sh@370 -- # [[ -n 1644722 ]] 00:06:22.165 05:08:38 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:22.165 05:08:38 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.165 05:08:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.165 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.165 05:08:38 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:22.165 05:08:38 -- json_config/json_config.sh@246 -- # uname -s 00:06:22.165 05:08:38 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:22.165 05:08:38 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:22.165 05:08:38 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:22.165 05:08:38 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:22.165 05:08:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.165 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.165 05:08:38 -- json_config/json_config.sh@376 -- # killprocess 1644722 00:06:22.165 05:08:38 -- common/autotest_common.sh@936 -- # '[' -z 1644722 ']' 00:06:22.165 05:08:38 -- common/autotest_common.sh@940 -- # kill -0 1644722 00:06:22.424 05:08:38 -- common/autotest_common.sh@941 -- # uname 00:06:22.424 05:08:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.424 05:08:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1644722 00:06:22.424 05:08:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.424 05:08:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.424 05:08:38 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1644722' 00:06:22.424 killing process with pid 1644722 00:06:22.424 05:08:38 -- common/autotest_common.sh@955 -- # kill 1644722 00:06:22.424 05:08:38 -- common/autotest_common.sh@960 -- # wait 1644722 00:06:24.960 05:08:41 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.960 05:08:41 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:24.960 05:08:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.960 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 05:08:41 -- json_config/json_config.sh@381 -- # return 0 00:06:24.960 05:08:41 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:24.960 INFO: Success 00:06:24.960 05:08:41 -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:24.960 05:08:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:24.960 05:08:41 -- nvmf/common.sh@116 -- # sync 00:06:24.960 05:08:41 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:06:24.960 05:08:41 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:06:24.960 05:08:41 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:24.960 05:08:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:24.960 05:08:41 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:06:24.960 00:06:24.960 real 0m24.233s 00:06:24.960 user 0m26.948s 00:06:24.960 sys 0m7.522s 00:06:24.960 05:08:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.960 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 ************************************ 00:06:24.960 END TEST json_config 00:06:24.960 ************************************ 00:06:24.960 05:08:41 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:24.960 05:08:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.960 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 ************************************ 00:06:24.960 START TEST json_config_extra_key 00:06:24.960 ************************************ 00:06:24.960 05:08:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:24.960 05:08:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:24.960 05:08:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:24.960 05:08:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:24.960 05:08:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:24.960 05:08:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:24.960 05:08:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:24.960 05:08:41 -- scripts/common.sh@335 -- # IFS=.-: 00:06:24.960 05:08:41 -- scripts/common.sh@335 -- # read -ra ver1 00:06:24.960 05:08:41 -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.960 05:08:41 -- scripts/common.sh@336 -- # read -ra ver2 00:06:24.960 05:08:41 -- scripts/common.sh@337 -- # local 'op=<' 00:06:24.960 05:08:41 -- scripts/common.sh@339 -- # ver1_l=2 00:06:24.960 05:08:41 -- scripts/common.sh@340 -- # ver2_l=1 00:06:24.960 05:08:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:24.960 05:08:41 -- scripts/common.sh@343 -- # case "$op" in 00:06:24.960 05:08:41 -- 
scripts/common.sh@344 -- # : 1 00:06:24.960 05:08:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:24.960 05:08:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.960 05:08:41 -- scripts/common.sh@364 -- # decimal 1 00:06:24.960 05:08:41 -- scripts/common.sh@352 -- # local d=1 00:06:24.960 05:08:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.960 05:08:41 -- scripts/common.sh@354 -- # echo 1 00:06:24.960 05:08:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:24.960 05:08:41 -- scripts/common.sh@365 -- # decimal 2 00:06:24.960 05:08:41 -- scripts/common.sh@352 -- # local d=2 00:06:24.960 05:08:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.960 05:08:41 -- scripts/common.sh@354 -- # echo 2 00:06:24.960 05:08:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:24.960 05:08:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:24.960 05:08:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:24.960 05:08:41 -- scripts/common.sh@367 -- # return 0 00:06:24.960 05:08:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.960 --rc genhtml_branch_coverage=1 00:06:24.960 --rc genhtml_function_coverage=1 00:06:24.960 --rc genhtml_legend=1 00:06:24.960 --rc geninfo_all_blocks=1 00:06:24.960 --rc geninfo_unexecuted_blocks=1 00:06:24.960 00:06:24.960 ' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.960 --rc genhtml_branch_coverage=1 00:06:24.960 --rc genhtml_function_coverage=1 00:06:24.960 --rc genhtml_legend=1 00:06:24.960 --rc geninfo_all_blocks=1 00:06:24.960 --rc geninfo_unexecuted_blocks=1 00:06:24.960 00:06:24.960 ' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.960 --rc genhtml_branch_coverage=1 00:06:24.960 --rc genhtml_function_coverage=1 00:06:24.960 --rc genhtml_legend=1 00:06:24.960 --rc geninfo_all_blocks=1 00:06:24.960 --rc geninfo_unexecuted_blocks=1 00:06:24.960 00:06:24.960 ' 00:06:24.960 05:08:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.960 --rc genhtml_branch_coverage=1 00:06:24.960 --rc genhtml_function_coverage=1 00:06:24.960 --rc genhtml_legend=1 00:06:24.960 --rc geninfo_all_blocks=1 00:06:24.960 --rc geninfo_unexecuted_blocks=1 00:06:24.960 00:06:24.960 ' 00:06:24.960 05:08:41 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.960 05:08:41 -- nvmf/common.sh@7 -- # uname -s 00:06:24.960 05:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.960 05:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.960 05:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.960 05:08:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.960 05:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.960 05:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.960 05:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.960 05:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.960 05:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
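[editor's note] The long scripts/common.sh trace above, repeated before each sub-test in this log, is a componentwise version comparison: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares field by field to decide which lcov option set to export. The same logic compacted into a standalone sketch, with the non-numeric-field handling of the real cmp_versions omitted:

    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # strictly older
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # strictly newer
        done
        return 1                                        # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'

Missing fields default to 0, so 1.15 compares against 2 as 1.15 vs 2.0, matching the trace's outcome.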
00:06:24.960 05:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.220 05:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:25.220 05:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:25.220 05:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.220 05:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.220 05:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.220 05:08:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:25.220 05:08:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.220 05:08:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.220 05:08:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.220 05:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.220 05:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.220 05:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.220 05:08:41 -- paths/export.sh@5 -- # export PATH 00:06:25.220 05:08:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.220 05:08:41 -- nvmf/common.sh@46 -- # : 0 00:06:25.220 05:08:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:25.220 05:08:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:25.220 05:08:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:25.220 05:08:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.220 05:08:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.220 05:08:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:25.220 05:08:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:25.220 05:08:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:25.220 INFO: launching applications... 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1646207 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:25.220 Waiting for target to run... 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1646207 /var/tmp/spdk_tgt.sock 00:06:25.220 05:08:41 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.220 05:08:41 -- common/autotest_common.sh@829 -- # '[' -z 1646207 ']' 00:06:25.220 05:08:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.220 05:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.220 05:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.220 05:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.220 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:06:25.220 [2024-11-19 05:08:41.565357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
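[editor's note] The declare -A lines above set up the bookkeeping this suite uses for target processes: one associative array each for pid, RPC socket, extra parameters and config path, all keyed by app name ("target" here). A reduced sketch of how those arrays drive the launch; start_app is an illustrative name, the real helper is json_config_test_start_app:

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]=./test/json_config/extra_key.json)
    declare -A app_pid

    start_app() {
        local app=$1
        # flags as traced above: -m core mask, -s memory size (MiB), -r RPC socket
        ./build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!      # recorded so shutdown can kill -SIGINT it later
        echo "Waiting for $app to run..."
    }
    start_app target

Keying everything by app name lets the same start and shutdown helpers serve both the "target" and (in other runs) "initiator" instances.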
00:06:25.220 [2024-11-19 05:08:41.565422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646207 ] 00:06:25.220 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.480 [2024-11-19 05:08:41.844859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.480 [2024-11-19 05:08:41.864179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.480 [2024-11-19 05:08:41.864296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.048 05:08:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.048 05:08:42 -- common/autotest_common.sh@862 -- # return 0 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:26.048 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:26.048 INFO: shutting down applications... 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1646207 ]] 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1646207 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1646207 00:06:26.048 05:08:42 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1646207 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:26.617 SPDK target shutdown done 00:06:26.617 05:08:42 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:26.617 Success 00:06:26.617 00:06:26.617 real 0m1.508s 00:06:26.617 user 0m1.257s 00:06:26.617 sys 0m0.396s 00:06:26.617 05:08:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.617 05:08:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.617 ************************************ 00:06:26.617 END TEST json_config_extra_key 00:06:26.617 ************************************ 00:06:26.617 05:08:42 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.617 05:08:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.618 05:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.618 05:08:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 ************************************ 00:06:26.618 START TEST alias_rpc 00:06:26.618 ************************************ 00:06:26.618 05:08:42 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.618 * Looking for test storage... 00:06:26.618 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:26.618 05:08:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:26.618 05:08:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:26.618 05:08:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:26.618 05:08:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:26.618 05:08:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:26.618 05:08:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:26.618 05:08:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:26.618 05:08:43 -- scripts/common.sh@335 -- # IFS=.-: 00:06:26.618 05:08:43 -- scripts/common.sh@335 -- # read -ra ver1 00:06:26.618 05:08:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.618 05:08:43 -- scripts/common.sh@336 -- # read -ra ver2 00:06:26.618 05:08:43 -- scripts/common.sh@337 -- # local 'op=<' 00:06:26.618 05:08:43 -- scripts/common.sh@339 -- # ver1_l=2 00:06:26.618 05:08:43 -- scripts/common.sh@340 -- # ver2_l=1 00:06:26.618 05:08:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:26.618 05:08:43 -- scripts/common.sh@343 -- # case "$op" in 00:06:26.618 05:08:43 -- scripts/common.sh@344 -- # : 1 00:06:26.618 05:08:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:26.618 05:08:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.618 05:08:43 -- scripts/common.sh@364 -- # decimal 1 00:06:26.618 05:08:43 -- scripts/common.sh@352 -- # local d=1 00:06:26.618 05:08:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.618 05:08:43 -- scripts/common.sh@354 -- # echo 1 00:06:26.618 05:08:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:26.618 05:08:43 -- scripts/common.sh@365 -- # decimal 2 00:06:26.618 05:08:43 -- scripts/common.sh@352 -- # local d=2 00:06:26.618 05:08:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.618 05:08:43 -- scripts/common.sh@354 -- # echo 2 00:06:26.618 05:08:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:26.618 05:08:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:26.618 05:08:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:26.618 05:08:43 -- scripts/common.sh@367 -- # return 0 00:06:26.618 05:08:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.618 05:08:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.618 --rc genhtml_branch_coverage=1 00:06:26.618 --rc genhtml_function_coverage=1 00:06:26.618 --rc genhtml_legend=1 00:06:26.618 --rc geninfo_all_blocks=1 00:06:26.618 --rc geninfo_unexecuted_blocks=1 00:06:26.618 00:06:26.618 ' 00:06:26.618 05:08:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.618 --rc genhtml_branch_coverage=1 00:06:26.618 --rc genhtml_function_coverage=1 00:06:26.618 --rc genhtml_legend=1 00:06:26.618 --rc geninfo_all_blocks=1 00:06:26.618 --rc geninfo_unexecuted_blocks=1 00:06:26.618 00:06:26.618 ' 00:06:26.618 05:08:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.618 --rc genhtml_branch_coverage=1 00:06:26.618 --rc 
genhtml_function_coverage=1 00:06:26.618 --rc genhtml_legend=1 00:06:26.618 --rc geninfo_all_blocks=1 00:06:26.618 --rc geninfo_unexecuted_blocks=1 00:06:26.618 00:06:26.618 ' 00:06:26.618 05:08:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.618 --rc genhtml_branch_coverage=1 00:06:26.618 --rc genhtml_function_coverage=1 00:06:26.618 --rc genhtml_legend=1 00:06:26.618 --rc geninfo_all_blocks=1 00:06:26.618 --rc geninfo_unexecuted_blocks=1 00:06:26.618 00:06:26.618 ' 00:06:26.618 05:08:43 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.618 05:08:43 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1646540 00:06:26.618 05:08:43 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.618 05:08:43 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1646540 00:06:26.618 05:08:43 -- common/autotest_common.sh@829 -- # '[' -z 1646540 ']' 00:06:26.618 05:08:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.618 05:08:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.618 05:08:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.618 05:08:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.618 05:08:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 [2024-11-19 05:08:43.153862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.618 [2024-11-19 05:08:43.153921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646540 ] 00:06:26.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.877 [2024-11-19 05:08:43.223266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.877 [2024-11-19 05:08:43.260476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.877 [2024-11-19 05:08:43.260628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.445 05:08:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.445 05:08:43 -- common/autotest_common.sh@862 -- # return 0 00:06:27.445 05:08:43 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:27.705 05:08:44 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1646540 00:06:27.705 05:08:44 -- common/autotest_common.sh@936 -- # '[' -z 1646540 ']' 00:06:27.705 05:08:44 -- common/autotest_common.sh@940 -- # kill -0 1646540 00:06:27.705 05:08:44 -- common/autotest_common.sh@941 -- # uname 00:06:27.705 05:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.705 05:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1646540 00:06:27.705 05:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.705 05:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.705 05:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1646540' 00:06:27.705 killing process with pid 1646540 00:06:27.705 05:08:44 -- common/autotest_common.sh@955 -- # kill 1646540 00:06:27.705 05:08:44 -- 
common/autotest_common.sh@960 -- # wait 1646540 00:06:27.964 00:06:27.964 real 0m1.579s 00:06:27.964 user 0m1.681s 00:06:27.964 sys 0m0.466s 00:06:27.964 05:08:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.964 05:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:27.964 ************************************ 00:06:27.964 END TEST alias_rpc 00:06:27.964 ************************************ 00:06:28.223 05:08:44 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:28.223 05:08:44 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.223 05:08:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.223 05:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:28.223 ************************************ 00:06:28.223 START TEST spdkcli_tcp 00:06:28.223 ************************************ 00:06:28.223 05:08:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.223 * Looking for test storage... 00:06:28.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:28.223 05:08:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:28.223 05:08:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:28.223 05:08:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:28.223 05:08:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:28.223 05:08:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:28.223 05:08:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:28.223 05:08:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:28.223 05:08:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:28.223 05:08:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.223 05:08:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:28.223 05:08:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:28.223 05:08:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:28.223 05:08:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:28.223 05:08:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:28.223 05:08:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:28.223 05:08:44 -- scripts/common.sh@344 -- # : 1 00:06:28.223 05:08:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:28.223 05:08:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.223 05:08:44 -- scripts/common.sh@364 -- # decimal 1 00:06:28.223 05:08:44 -- scripts/common.sh@352 -- # local d=1 00:06:28.223 05:08:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.223 05:08:44 -- scripts/common.sh@354 -- # echo 1 00:06:28.223 05:08:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:28.223 05:08:44 -- scripts/common.sh@365 -- # decimal 2 00:06:28.223 05:08:44 -- scripts/common.sh@352 -- # local d=2 00:06:28.223 05:08:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.223 05:08:44 -- scripts/common.sh@354 -- # echo 2 00:06:28.223 05:08:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:28.223 05:08:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:28.223 05:08:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:28.223 05:08:44 -- scripts/common.sh@367 -- # return 0 00:06:28.223 05:08:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:28.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.223 --rc genhtml_branch_coverage=1 00:06:28.223 --rc genhtml_function_coverage=1 00:06:28.223 --rc genhtml_legend=1 00:06:28.223 --rc geninfo_all_blocks=1 00:06:28.223 --rc geninfo_unexecuted_blocks=1 00:06:28.223 00:06:28.223 ' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:28.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.223 --rc genhtml_branch_coverage=1 00:06:28.223 --rc genhtml_function_coverage=1 00:06:28.223 --rc genhtml_legend=1 00:06:28.223 --rc geninfo_all_blocks=1 00:06:28.223 --rc geninfo_unexecuted_blocks=1 00:06:28.223 00:06:28.223 ' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:28.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.223 --rc genhtml_branch_coverage=1 00:06:28.223 --rc genhtml_function_coverage=1 00:06:28.223 --rc genhtml_legend=1 00:06:28.223 --rc geninfo_all_blocks=1 00:06:28.223 --rc geninfo_unexecuted_blocks=1 00:06:28.223 00:06:28.223 ' 00:06:28.223 05:08:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:28.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.223 --rc genhtml_branch_coverage=1 00:06:28.223 --rc genhtml_function_coverage=1 00:06:28.223 --rc genhtml_legend=1 00:06:28.223 --rc geninfo_all_blocks=1 00:06:28.223 --rc geninfo_unexecuted_blocks=1 00:06:28.223 00:06:28.223 ' 00:06:28.223 05:08:44 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:28.223 05:08:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:28.224 05:08:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.224 05:08:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.224 05:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1646914 00:06:28.224 05:08:44 -- spdkcli/tcp.sh@27 -- # waitforlisten 1646914 00:06:28.224 05:08:44 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.224 05:08:44 -- common/autotest_common.sh@829 -- # '[' -z 1646914 ']' 00:06:28.224 05:08:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.224 05:08:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.224 05:08:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.224 05:08:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.224 05:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:28.483 [2024-11-19 05:08:44.795755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.483 [2024-11-19 05:08:44.795812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646914 ] 00:06:28.483 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.483 [2024-11-19 05:08:44.868780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.483 [2024-11-19 05:08:44.905945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.483 [2024-11-19 05:08:44.906112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.483 [2024-11-19 05:08:44.906115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.051 05:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.051 05:08:45 -- common/autotest_common.sh@862 -- # return 0 00:06:29.051 05:08:45 -- spdkcli/tcp.sh@31 -- # socat_pid=1647130 00:06:29.051 05:08:45 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.051 05:08:45 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.311 [ 00:06:29.311 "bdev_malloc_delete", 00:06:29.311 "bdev_malloc_create", 00:06:29.311 "bdev_null_resize", 00:06:29.311 "bdev_null_delete", 00:06:29.311 "bdev_null_create", 00:06:29.311 "bdev_nvme_cuse_unregister", 00:06:29.311 "bdev_nvme_cuse_register", 00:06:29.311 "bdev_opal_new_user", 00:06:29.311 "bdev_opal_set_lock_state", 00:06:29.311 "bdev_opal_delete", 00:06:29.311 "bdev_opal_get_info", 00:06:29.311 "bdev_opal_create", 00:06:29.311 "bdev_nvme_opal_revert", 00:06:29.311 "bdev_nvme_opal_init", 00:06:29.311 "bdev_nvme_send_cmd", 00:06:29.311 "bdev_nvme_get_path_iostat", 00:06:29.311 "bdev_nvme_get_mdns_discovery_info", 00:06:29.311 "bdev_nvme_stop_mdns_discovery", 00:06:29.311 "bdev_nvme_start_mdns_discovery", 00:06:29.311 "bdev_nvme_set_multipath_policy", 00:06:29.311 "bdev_nvme_set_preferred_path", 00:06:29.311 "bdev_nvme_get_io_paths", 00:06:29.311 "bdev_nvme_remove_error_injection", 00:06:29.311 "bdev_nvme_add_error_injection", 00:06:29.311 "bdev_nvme_get_discovery_info", 00:06:29.311 "bdev_nvme_stop_discovery", 00:06:29.311 "bdev_nvme_start_discovery", 00:06:29.311 "bdev_nvme_get_controller_health_info", 00:06:29.311 "bdev_nvme_disable_controller", 00:06:29.311 "bdev_nvme_enable_controller", 00:06:29.311 "bdev_nvme_reset_controller", 00:06:29.311 "bdev_nvme_get_transport_statistics", 00:06:29.311 "bdev_nvme_apply_firmware", 00:06:29.311 "bdev_nvme_detach_controller", 
00:06:29.311 "bdev_nvme_get_controllers", 00:06:29.311 "bdev_nvme_attach_controller", 00:06:29.311 "bdev_nvme_set_hotplug", 00:06:29.311 "bdev_nvme_set_options", 00:06:29.311 "bdev_passthru_delete", 00:06:29.311 "bdev_passthru_create", 00:06:29.311 "bdev_lvol_grow_lvstore", 00:06:29.311 "bdev_lvol_get_lvols", 00:06:29.311 "bdev_lvol_get_lvstores", 00:06:29.311 "bdev_lvol_delete", 00:06:29.311 "bdev_lvol_set_read_only", 00:06:29.311 "bdev_lvol_resize", 00:06:29.311 "bdev_lvol_decouple_parent", 00:06:29.311 "bdev_lvol_inflate", 00:06:29.311 "bdev_lvol_rename", 00:06:29.311 "bdev_lvol_clone_bdev", 00:06:29.311 "bdev_lvol_clone", 00:06:29.311 "bdev_lvol_snapshot", 00:06:29.311 "bdev_lvol_create", 00:06:29.311 "bdev_lvol_delete_lvstore", 00:06:29.311 "bdev_lvol_rename_lvstore", 00:06:29.311 "bdev_lvol_create_lvstore", 00:06:29.311 "bdev_raid_set_options", 00:06:29.311 "bdev_raid_remove_base_bdev", 00:06:29.311 "bdev_raid_add_base_bdev", 00:06:29.311 "bdev_raid_delete", 00:06:29.311 "bdev_raid_create", 00:06:29.311 "bdev_raid_get_bdevs", 00:06:29.311 "bdev_error_inject_error", 00:06:29.311 "bdev_error_delete", 00:06:29.311 "bdev_error_create", 00:06:29.311 "bdev_split_delete", 00:06:29.311 "bdev_split_create", 00:06:29.311 "bdev_delay_delete", 00:06:29.311 "bdev_delay_create", 00:06:29.311 "bdev_delay_update_latency", 00:06:29.311 "bdev_zone_block_delete", 00:06:29.311 "bdev_zone_block_create", 00:06:29.311 "blobfs_create", 00:06:29.311 "blobfs_detect", 00:06:29.311 "blobfs_set_cache_size", 00:06:29.311 "bdev_aio_delete", 00:06:29.311 "bdev_aio_rescan", 00:06:29.311 "bdev_aio_create", 00:06:29.311 "bdev_ftl_set_property", 00:06:29.311 "bdev_ftl_get_properties", 00:06:29.311 "bdev_ftl_get_stats", 00:06:29.311 "bdev_ftl_unmap", 00:06:29.311 "bdev_ftl_unload", 00:06:29.311 "bdev_ftl_delete", 00:06:29.311 "bdev_ftl_load", 00:06:29.311 "bdev_ftl_create", 00:06:29.311 "bdev_virtio_attach_controller", 00:06:29.311 "bdev_virtio_scsi_get_devices", 00:06:29.311 "bdev_virtio_detach_controller", 00:06:29.311 "bdev_virtio_blk_set_hotplug", 00:06:29.311 "bdev_iscsi_delete", 00:06:29.311 "bdev_iscsi_create", 00:06:29.311 "bdev_iscsi_set_options", 00:06:29.311 "accel_error_inject_error", 00:06:29.311 "ioat_scan_accel_module", 00:06:29.311 "dsa_scan_accel_module", 00:06:29.311 "iaa_scan_accel_module", 00:06:29.311 "iscsi_set_options", 00:06:29.311 "iscsi_get_auth_groups", 00:06:29.311 "iscsi_auth_group_remove_secret", 00:06:29.311 "iscsi_auth_group_add_secret", 00:06:29.311 "iscsi_delete_auth_group", 00:06:29.311 "iscsi_create_auth_group", 00:06:29.311 "iscsi_set_discovery_auth", 00:06:29.311 "iscsi_get_options", 00:06:29.311 "iscsi_target_node_request_logout", 00:06:29.311 "iscsi_target_node_set_redirect", 00:06:29.311 "iscsi_target_node_set_auth", 00:06:29.311 "iscsi_target_node_add_lun", 00:06:29.311 "iscsi_get_connections", 00:06:29.311 "iscsi_portal_group_set_auth", 00:06:29.311 "iscsi_start_portal_group", 00:06:29.311 "iscsi_delete_portal_group", 00:06:29.311 "iscsi_create_portal_group", 00:06:29.311 "iscsi_get_portal_groups", 00:06:29.311 "iscsi_delete_target_node", 00:06:29.311 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.311 "iscsi_target_node_add_pg_ig_maps", 00:06:29.311 "iscsi_create_target_node", 00:06:29.311 "iscsi_get_target_nodes", 00:06:29.311 "iscsi_delete_initiator_group", 00:06:29.311 "iscsi_initiator_group_remove_initiators", 00:06:29.311 "iscsi_initiator_group_add_initiators", 00:06:29.311 "iscsi_create_initiator_group", 00:06:29.311 "iscsi_get_initiator_groups", 00:06:29.311 
"nvmf_set_crdt", 00:06:29.311 "nvmf_set_config", 00:06:29.311 "nvmf_set_max_subsystems", 00:06:29.311 "nvmf_subsystem_get_listeners", 00:06:29.311 "nvmf_subsystem_get_qpairs", 00:06:29.311 "nvmf_subsystem_get_controllers", 00:06:29.311 "nvmf_get_stats", 00:06:29.311 "nvmf_get_transports", 00:06:29.311 "nvmf_create_transport", 00:06:29.311 "nvmf_get_targets", 00:06:29.311 "nvmf_delete_target", 00:06:29.311 "nvmf_create_target", 00:06:29.311 "nvmf_subsystem_allow_any_host", 00:06:29.311 "nvmf_subsystem_remove_host", 00:06:29.311 "nvmf_subsystem_add_host", 00:06:29.311 "nvmf_subsystem_remove_ns", 00:06:29.311 "nvmf_subsystem_add_ns", 00:06:29.311 "nvmf_subsystem_listener_set_ana_state", 00:06:29.311 "nvmf_discovery_get_referrals", 00:06:29.311 "nvmf_discovery_remove_referral", 00:06:29.311 "nvmf_discovery_add_referral", 00:06:29.311 "nvmf_subsystem_remove_listener", 00:06:29.311 "nvmf_subsystem_add_listener", 00:06:29.311 "nvmf_delete_subsystem", 00:06:29.311 "nvmf_create_subsystem", 00:06:29.311 "nvmf_get_subsystems", 00:06:29.311 "env_dpdk_get_mem_stats", 00:06:29.311 "nbd_get_disks", 00:06:29.311 "nbd_stop_disk", 00:06:29.311 "nbd_start_disk", 00:06:29.311 "ublk_recover_disk", 00:06:29.311 "ublk_get_disks", 00:06:29.311 "ublk_stop_disk", 00:06:29.311 "ublk_start_disk", 00:06:29.311 "ublk_destroy_target", 00:06:29.311 "ublk_create_target", 00:06:29.311 "virtio_blk_create_transport", 00:06:29.311 "virtio_blk_get_transports", 00:06:29.311 "vhost_controller_set_coalescing", 00:06:29.311 "vhost_get_controllers", 00:06:29.311 "vhost_delete_controller", 00:06:29.311 "vhost_create_blk_controller", 00:06:29.311 "vhost_scsi_controller_remove_target", 00:06:29.311 "vhost_scsi_controller_add_target", 00:06:29.311 "vhost_start_scsi_controller", 00:06:29.311 "vhost_create_scsi_controller", 00:06:29.311 "thread_set_cpumask", 00:06:29.311 "framework_get_scheduler", 00:06:29.311 "framework_set_scheduler", 00:06:29.311 "framework_get_reactors", 00:06:29.311 "thread_get_io_channels", 00:06:29.311 "thread_get_pollers", 00:06:29.311 "thread_get_stats", 00:06:29.311 "framework_monitor_context_switch", 00:06:29.311 "spdk_kill_instance", 00:06:29.311 "log_enable_timestamps", 00:06:29.311 "log_get_flags", 00:06:29.311 "log_clear_flag", 00:06:29.311 "log_set_flag", 00:06:29.311 "log_get_level", 00:06:29.311 "log_set_level", 00:06:29.311 "log_get_print_level", 00:06:29.311 "log_set_print_level", 00:06:29.311 "framework_enable_cpumask_locks", 00:06:29.311 "framework_disable_cpumask_locks", 00:06:29.311 "framework_wait_init", 00:06:29.311 "framework_start_init", 00:06:29.311 "scsi_get_devices", 00:06:29.311 "bdev_get_histogram", 00:06:29.311 "bdev_enable_histogram", 00:06:29.311 "bdev_set_qos_limit", 00:06:29.312 "bdev_set_qd_sampling_period", 00:06:29.312 "bdev_get_bdevs", 00:06:29.312 "bdev_reset_iostat", 00:06:29.312 "bdev_get_iostat", 00:06:29.312 "bdev_examine", 00:06:29.312 "bdev_wait_for_examine", 00:06:29.312 "bdev_set_options", 00:06:29.312 "notify_get_notifications", 00:06:29.312 "notify_get_types", 00:06:29.312 "accel_get_stats", 00:06:29.312 "accel_set_options", 00:06:29.312 "accel_set_driver", 00:06:29.312 "accel_crypto_key_destroy", 00:06:29.312 "accel_crypto_keys_get", 00:06:29.312 "accel_crypto_key_create", 00:06:29.312 "accel_assign_opc", 00:06:29.312 "accel_get_module_info", 00:06:29.312 "accel_get_opc_assignments", 00:06:29.312 "vmd_rescan", 00:06:29.312 "vmd_remove_device", 00:06:29.312 "vmd_enable", 00:06:29.312 "sock_set_default_impl", 00:06:29.312 "sock_impl_set_options", 00:06:29.312 
"sock_impl_get_options", 00:06:29.312 "iobuf_get_stats", 00:06:29.312 "iobuf_set_options", 00:06:29.312 "framework_get_pci_devices", 00:06:29.312 "framework_get_config", 00:06:29.312 "framework_get_subsystems", 00:06:29.312 "trace_get_info", 00:06:29.312 "trace_get_tpoint_group_mask", 00:06:29.312 "trace_disable_tpoint_group", 00:06:29.312 "trace_enable_tpoint_group", 00:06:29.312 "trace_clear_tpoint_mask", 00:06:29.312 "trace_set_tpoint_mask", 00:06:29.312 "spdk_get_version", 00:06:29.312 "rpc_get_methods" 00:06:29.312 ] 00:06:29.312 05:08:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.312 05:08:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.312 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.312 05:08:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.312 05:08:45 -- spdkcli/tcp.sh@38 -- # killprocess 1646914 00:06:29.312 05:08:45 -- common/autotest_common.sh@936 -- # '[' -z 1646914 ']' 00:06:29.312 05:08:45 -- common/autotest_common.sh@940 -- # kill -0 1646914 00:06:29.312 05:08:45 -- common/autotest_common.sh@941 -- # uname 00:06:29.312 05:08:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.312 05:08:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1646914 00:06:29.571 05:08:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.571 05:08:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.571 05:08:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1646914' 00:06:29.571 killing process with pid 1646914 00:06:29.571 05:08:45 -- common/autotest_common.sh@955 -- # kill 1646914 00:06:29.571 05:08:45 -- common/autotest_common.sh@960 -- # wait 1646914 00:06:29.831 00:06:29.831 real 0m1.627s 00:06:29.831 user 0m2.916s 00:06:29.831 sys 0m0.548s 00:06:29.831 05:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.831 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:29.831 ************************************ 00:06:29.831 END TEST spdkcli_tcp 00:06:29.831 ************************************ 00:06:29.831 05:08:46 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.831 05:08:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.831 05:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.831 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:29.831 ************************************ 00:06:29.831 START TEST dpdk_mem_utility 00:06:29.831 ************************************ 00:06:29.831 05:08:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.831 * Looking for test storage... 
00:06:29.831 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.831 05:08:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:29.831 05:08:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:29.831 05:08:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:30.090 05:08:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:30.090 05:08:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:30.090 05:08:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:30.090 05:08:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:30.090 05:08:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:30.090 05:08:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:30.091 05:08:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.091 05:08:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:30.091 05:08:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:30.091 05:08:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:30.091 05:08:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:30.091 05:08:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:30.091 05:08:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:30.091 05:08:46 -- scripts/common.sh@344 -- # : 1 00:06:30.091 05:08:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:30.091 05:08:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.091 05:08:46 -- scripts/common.sh@364 -- # decimal 1 00:06:30.091 05:08:46 -- scripts/common.sh@352 -- # local d=1 00:06:30.091 05:08:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.091 05:08:46 -- scripts/common.sh@354 -- # echo 1 00:06:30.091 05:08:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:30.091 05:08:46 -- scripts/common.sh@365 -- # decimal 2 00:06:30.091 05:08:46 -- scripts/common.sh@352 -- # local d=2 00:06:30.091 05:08:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.091 05:08:46 -- scripts/common.sh@354 -- # echo 2 00:06:30.091 05:08:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:30.091 05:08:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:30.091 05:08:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:30.091 05:08:46 -- scripts/common.sh@367 -- # return 0 00:06:30.091 05:08:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.091 05:08:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:30.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.091 --rc genhtml_branch_coverage=1 00:06:30.091 --rc genhtml_function_coverage=1 00:06:30.091 --rc genhtml_legend=1 00:06:30.091 --rc geninfo_all_blocks=1 00:06:30.091 --rc geninfo_unexecuted_blocks=1 00:06:30.091 00:06:30.091 ' 00:06:30.091 05:08:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:30.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.091 --rc genhtml_branch_coverage=1 00:06:30.091 --rc genhtml_function_coverage=1 00:06:30.091 --rc genhtml_legend=1 00:06:30.091 --rc geninfo_all_blocks=1 00:06:30.091 --rc geninfo_unexecuted_blocks=1 00:06:30.091 00:06:30.091 ' 00:06:30.091 05:08:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:30.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.091 --rc genhtml_branch_coverage=1 00:06:30.091 --rc genhtml_function_coverage=1 00:06:30.091 --rc genhtml_legend=1 00:06:30.091 --rc geninfo_all_blocks=1 00:06:30.091 --rc geninfo_unexecuted_blocks=1 00:06:30.091 
00:06:30.091 ' 00:06:30.091 05:08:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:30.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.091 --rc genhtml_branch_coverage=1 00:06:30.091 --rc genhtml_function_coverage=1 00:06:30.091 --rc genhtml_legend=1 00:06:30.091 --rc geninfo_all_blocks=1 00:06:30.091 --rc geninfo_unexecuted_blocks=1 00:06:30.091 00:06:30.091 ' 00:06:30.091 05:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:30.091 05:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.091 05:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1647344 00:06:30.091 05:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1647344 00:06:30.091 05:08:46 -- common/autotest_common.sh@829 -- # '[' -z 1647344 ']' 00:06:30.091 05:08:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.091 05:08:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.091 05:08:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.091 05:08:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.091 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:30.091 [2024-11-19 05:08:46.450211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.091 [2024-11-19 05:08:46.450265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647344 ] 00:06:30.091 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.091 [2024-11-19 05:08:46.516451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.091 [2024-11-19 05:08:46.553497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.091 [2024-11-19 05:08:46.553623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.029 05:08:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.029 05:08:47 -- common/autotest_common.sh@862 -- # return 0 00:06:31.029 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:31.029 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:31.029 05:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.029 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.029 { 00:06:31.029 "filename": "/tmp/spdk_mem_dump.txt" 00:06:31.029 } 00:06:31.029 05:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.029 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:31.029 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:31.029 1 heaps totaling size 814.000000 MiB 00:06:31.029 size: 814.000000 MiB heap id: 0 00:06:31.029 end heaps---------- 00:06:31.029 8 mempools totaling size 598.116089 MiB 00:06:31.029 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:31.029 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:31.029 size: 84.521057 MiB name: 
bdev_io_1647344 00:06:31.029 size: 51.011292 MiB name: evtpool_1647344 00:06:31.029 size: 50.003479 MiB name: msgpool_1647344 00:06:31.029 size: 21.763794 MiB name: PDU_Pool 00:06:31.029 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:31.029 size: 0.026123 MiB name: Session_Pool 00:06:31.029 end mempools------- 00:06:31.029 6 memzones totaling size 4.142822 MiB 00:06:31.029 size: 1.000366 MiB name: RG_ring_0_1647344 00:06:31.029 size: 1.000366 MiB name: RG_ring_1_1647344 00:06:31.029 size: 1.000366 MiB name: RG_ring_4_1647344 00:06:31.029 size: 1.000366 MiB name: RG_ring_5_1647344 00:06:31.029 size: 0.125366 MiB name: RG_ring_2_1647344 00:06:31.029 size: 0.015991 MiB name: RG_ring_3_1647344 00:06:31.029 end memzones------- 00:06:31.029 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:31.029 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:31.029 list of free elements. size: 12.519348 MiB 00:06:31.029 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:31.029 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:31.029 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:31.029 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:31.029 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:31.029 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:31.029 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:31.029 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:31.029 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:31.029 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:31.029 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:31.029 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:31.029 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:31.029 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:31.029 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:31.029 list of standard malloc elements. 
size: 199.218079 MiB 00:06:31.029 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:31.029 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:31.029 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:31.029 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:31.029 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:31.029 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:31.029 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:31.029 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:31.029 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:31.029 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:31.029 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:31.029 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:31.029 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:31.029 list of memzone associated elements. 
size: 602.262573 MiB 00:06:31.029 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:31.029 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.029 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:31.029 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.029 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:31.029 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1647344_0 00:06:31.029 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:31.029 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1647344_0 00:06:31.029 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:31.029 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1647344_0 00:06:31.029 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:31.029 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.029 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:31.029 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.029 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:31.029 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1647344 00:06:31.029 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:31.029 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1647344 00:06:31.029 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:31.030 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1647344 00:06:31.030 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:31.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.030 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:31.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.030 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:31.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.030 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:31.030 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.030 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:31.030 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1647344 00:06:31.030 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:31.030 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1647344 00:06:31.030 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:31.030 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1647344 00:06:31.030 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:31.030 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1647344 00:06:31.030 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:31.030 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1647344 00:06:31.030 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:31.030 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.030 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:31.030 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:31.030 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:31.030 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.030 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:31.030 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1647344 00:06:31.030 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:31.030 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.030 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:31.030 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.030 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:31.030 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1647344 00:06:31.030 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:31.030 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.030 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:31.030 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1647344 00:06:31.030 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:31.030 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1647344 00:06:31.030 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:31.030 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.030 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.030 05:08:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1647344 00:06:31.030 05:08:47 -- common/autotest_common.sh@936 -- # '[' -z 1647344 ']' 00:06:31.030 05:08:47 -- common/autotest_common.sh@940 -- # kill -0 1647344 00:06:31.030 05:08:47 -- common/autotest_common.sh@941 -- # uname 00:06:31.030 05:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.030 05:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1647344 00:06:31.030 05:08:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.030 05:08:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.030 05:08:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1647344' 00:06:31.030 killing process with pid 1647344 00:06:31.030 05:08:47 -- common/autotest_common.sh@955 -- # kill 1647344 00:06:31.030 05:08:47 -- common/autotest_common.sh@960 -- # wait 1647344 00:06:31.289 00:06:31.289 real 0m1.503s 00:06:31.289 user 0m1.533s 00:06:31.289 sys 0m0.473s 00:06:31.289 05:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.289 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.289 ************************************ 00:06:31.289 END TEST dpdk_mem_utility 00:06:31.289 ************************************ 00:06:31.289 05:08:47 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:31.289 05:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.289 05:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.289 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.289 ************************************ 00:06:31.289 START TEST event 00:06:31.289 ************************************ 00:06:31.289 05:08:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:31.549 * Looking for test storage... 
00:06:31.550 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:31.550 05:08:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:31.550 05:08:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:31.550 05:08:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:31.550 05:08:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:31.550 05:08:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:31.550 05:08:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:31.550 05:08:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:31.550 05:08:47 -- scripts/common.sh@335 -- # IFS=.-: 00:06:31.550 05:08:47 -- scripts/common.sh@335 -- # read -ra ver1 00:06:31.550 05:08:47 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.550 05:08:47 -- scripts/common.sh@336 -- # read -ra ver2 00:06:31.550 05:08:47 -- scripts/common.sh@337 -- # local 'op=<' 00:06:31.550 05:08:47 -- scripts/common.sh@339 -- # ver1_l=2 00:06:31.550 05:08:47 -- scripts/common.sh@340 -- # ver2_l=1 00:06:31.550 05:08:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:31.550 05:08:47 -- scripts/common.sh@343 -- # case "$op" in 00:06:31.550 05:08:47 -- scripts/common.sh@344 -- # : 1 00:06:31.550 05:08:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:31.550 05:08:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.550 05:08:47 -- scripts/common.sh@364 -- # decimal 1 00:06:31.550 05:08:47 -- scripts/common.sh@352 -- # local d=1 00:06:31.550 05:08:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.550 05:08:47 -- scripts/common.sh@354 -- # echo 1 00:06:31.550 05:08:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:31.550 05:08:47 -- scripts/common.sh@365 -- # decimal 2 00:06:31.550 05:08:47 -- scripts/common.sh@352 -- # local d=2 00:06:31.550 05:08:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.550 05:08:47 -- scripts/common.sh@354 -- # echo 2 00:06:31.550 05:08:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:31.550 05:08:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:31.550 05:08:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:31.550 05:08:47 -- scripts/common.sh@367 -- # return 0 00:06:31.550 05:08:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.550 05:08:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:31.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.550 --rc genhtml_branch_coverage=1 00:06:31.550 --rc genhtml_function_coverage=1 00:06:31.550 --rc genhtml_legend=1 00:06:31.550 --rc geninfo_all_blocks=1 00:06:31.550 --rc geninfo_unexecuted_blocks=1 00:06:31.550 00:06:31.550 ' 00:06:31.550 05:08:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:31.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.550 --rc genhtml_branch_coverage=1 00:06:31.550 --rc genhtml_function_coverage=1 00:06:31.550 --rc genhtml_legend=1 00:06:31.550 --rc geninfo_all_blocks=1 00:06:31.550 --rc geninfo_unexecuted_blocks=1 00:06:31.550 00:06:31.550 ' 00:06:31.550 05:08:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:31.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.550 --rc genhtml_branch_coverage=1 00:06:31.550 --rc genhtml_function_coverage=1 00:06:31.550 --rc genhtml_legend=1 00:06:31.550 --rc geninfo_all_blocks=1 00:06:31.550 --rc geninfo_unexecuted_blocks=1 00:06:31.550 00:06:31.550 ' 
00:06:31.550 05:08:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:31.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.550 --rc genhtml_branch_coverage=1 00:06:31.550 --rc genhtml_function_coverage=1 00:06:31.550 --rc genhtml_legend=1 00:06:31.550 --rc geninfo_all_blocks=1 00:06:31.550 --rc geninfo_unexecuted_blocks=1 00:06:31.550 00:06:31.550 ' 00:06:31.550 05:08:47 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:31.550 05:08:47 -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.550 05:08:47 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.550 05:08:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:31.550 05:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.550 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.550 ************************************ 00:06:31.550 START TEST event_perf 00:06:31.550 ************************************ 00:06:31.550 05:08:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.550 Running I/O for 1 seconds...[2024-11-19 05:08:47.995920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.550 [2024-11-19 05:08:47.996010] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647713 ] 00:06:31.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.550 [2024-11-19 05:08:48.069271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.550 [2024-11-19 05:08:48.108033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.550 [2024-11-19 05:08:48.108128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.550 [2024-11-19 05:08:48.108217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.550 [2024-11-19 05:08:48.108218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.929 Running I/O for 1 seconds... 00:06:32.929 lcore 0: 219829 00:06:32.929 lcore 1: 219828 00:06:32.929 lcore 2: 219828 00:06:32.929 lcore 3: 219829 00:06:32.929 done. 
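The per-lcore event counts ending in "done." above come from SPDK's event_perf benchmark, which spins the reactor event loop for a fixed time and reports how many events each core processed; the wall-clock summary follows below. A minimal sketch, assuming the workspace layout shown in this log, of reproducing that run by hand:

  # Path taken from this log; adjust for a local checkout.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # -m 0xF pins reactors to cores 0-3; -t 1 runs the event loop for one second.
  # Each lcore prints its processed-event count before the final "done.".
  sudo $SPDK/test/event/event_perf/event_perf -m 0xF -t 1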
00:06:32.929 00:06:32.929 real 0m1.192s 00:06:32.929 user 0m4.100s 00:06:32.929 sys 0m0.091s 00:06:32.929 05:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.929 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:32.929 ************************************ 00:06:32.929 END TEST event_perf 00:06:32.929 ************************************ 00:06:32.929 05:08:49 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.929 05:08:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:32.929 05:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.929 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:32.929 ************************************ 00:06:32.929 START TEST event_reactor 00:06:32.929 ************************************ 00:06:32.929 05:08:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.929 [2024-11-19 05:08:49.239348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.929 [2024-11-19 05:08:49.239441] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647865 ] 00:06:32.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.929 [2024-11-19 05:08:49.312524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.929 [2024-11-19 05:08:49.349263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.867 test_start 00:06:33.867 oneshot 00:06:33.867 tick 100 00:06:33.867 tick 100 00:06:33.867 tick 250 00:06:33.867 tick 100 00:06:33.867 tick 100 00:06:33.867 tick 250 00:06:33.867 tick 100 00:06:33.867 tick 500 00:06:33.867 tick 100 00:06:33.867 tick 100 00:06:33.867 tick 250 00:06:33.867 tick 100 00:06:33.867 tick 100 00:06:33.867 test_end 00:06:33.867 00:06:33.867 real 0m1.195s 00:06:33.867 user 0m1.100s 00:06:33.867 sys 0m0.090s 00:06:33.867 05:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.867 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:33.867 ************************************ 00:06:33.867 END TEST event_reactor 00:06:33.867 ************************************ 00:06:34.127 05:08:50 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.127 05:08:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:34.127 05:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.127 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:34.127 ************************************ 00:06:34.127 START TEST event_reactor_perf 00:06:34.127 ************************************ 00:06:34.127 05:08:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.127 [2024-11-19 05:08:50.478410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.127 [2024-11-19 05:08:50.478498] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648118 ] 00:06:34.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.127 [2024-11-19 05:08:50.550844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.127 [2024-11-19 05:08:50.585437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.507 test_start 00:06:35.507 test_end 00:06:35.507 Performance: 499664 events per second 00:06:35.507 00:06:35.507 real 0m1.191s 00:06:35.507 user 0m1.102s 00:06:35.507 sys 0m0.085s 00:06:35.507 05:08:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.507 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.507 ************************************ 00:06:35.507 END TEST event_reactor_perf 00:06:35.507 ************************************ 00:06:35.507 05:08:51 -- event/event.sh@49 -- # uname -s 00:06:35.507 05:08:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:35.507 05:08:51 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.507 05:08:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.507 05:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.507 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.507 ************************************ 00:06:35.507 START TEST event_scheduler 00:06:35.507 ************************************ 00:06:35.507 05:08:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.507 * Looking for test storage... 00:06:35.507 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:35.507 05:08:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.507 05:08:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.507 05:08:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.507 05:08:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.507 05:08:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.507 05:08:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.507 05:08:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.507 05:08:51 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.507 05:08:51 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.507 05:08:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.507 05:08:51 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.507 05:08:51 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.507 05:08:51 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.507 05:08:51 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.507 05:08:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.507 05:08:51 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.507 05:08:51 -- scripts/common.sh@344 -- # : 1 00:06:35.507 05:08:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.507 05:08:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.507 05:08:51 -- scripts/common.sh@364 -- # decimal 1 00:06:35.507 05:08:51 -- scripts/common.sh@352 -- # local d=1 00:06:35.507 05:08:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.507 05:08:51 -- scripts/common.sh@354 -- # echo 1 00:06:35.507 05:08:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.507 05:08:51 -- scripts/common.sh@365 -- # decimal 2 00:06:35.507 05:08:51 -- scripts/common.sh@352 -- # local d=2 00:06:35.507 05:08:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.507 05:08:51 -- scripts/common.sh@354 -- # echo 2 00:06:35.507 05:08:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.507 05:08:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.507 05:08:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.507 05:08:51 -- scripts/common.sh@367 -- # return 0 00:06:35.507 05:08:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.508 05:08:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.508 --rc genhtml_branch_coverage=1 00:06:35.508 --rc genhtml_function_coverage=1 00:06:35.508 --rc genhtml_legend=1 00:06:35.508 --rc geninfo_all_blocks=1 00:06:35.508 --rc geninfo_unexecuted_blocks=1 00:06:35.508 00:06:35.508 ' 00:06:35.508 05:08:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.508 --rc genhtml_branch_coverage=1 00:06:35.508 --rc genhtml_function_coverage=1 00:06:35.508 --rc genhtml_legend=1 00:06:35.508 --rc geninfo_all_blocks=1 00:06:35.508 --rc geninfo_unexecuted_blocks=1 00:06:35.508 00:06:35.508 ' 00:06:35.508 05:08:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.508 --rc genhtml_branch_coverage=1 00:06:35.508 --rc genhtml_function_coverage=1 00:06:35.508 --rc genhtml_legend=1 00:06:35.508 --rc geninfo_all_blocks=1 00:06:35.508 --rc geninfo_unexecuted_blocks=1 00:06:35.508 00:06:35.508 ' 00:06:35.508 05:08:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.508 --rc genhtml_branch_coverage=1 00:06:35.508 --rc genhtml_function_coverage=1 00:06:35.508 --rc genhtml_legend=1 00:06:35.508 --rc geninfo_all_blocks=1 00:06:35.508 --rc geninfo_unexecuted_blocks=1 00:06:35.508 00:06:35.508 ' 00:06:35.508 05:08:51 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:35.508 05:08:51 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1648439 00:06:35.508 05:08:51 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.508 05:08:51 -- scheduler/scheduler.sh@37 -- # waitforlisten 1648439 00:06:35.508 05:08:51 -- common/autotest_common.sh@829 -- # '[' -z 1648439 ']' 00:06:35.508 05:08:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.508 05:08:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.508 05:08:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.508 05:08:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.508 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.508 05:08:51 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:35.508 [2024-11-19 05:08:51.917644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.508 [2024-11-19 05:08:51.917699] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648439 ] 00:06:35.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.508 [2024-11-19 05:08:51.982733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.508 [2024-11-19 05:08:52.021987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.508 [2024-11-19 05:08:52.022070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.508 [2024-11-19 05:08:52.022158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.508 [2024-11-19 05:08:52.022160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.508 05:08:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.508 05:08:52 -- common/autotest_common.sh@862 -- # return 0 00:06:35.508 05:08:52 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.508 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.508 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.508 POWER: Env isn't set yet! 00:06:35.508 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:35.508 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.508 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.508 POWER: Attempting to initialise PSTAT power management... 00:06:35.767 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:35.767 POWER: Initialized successfully for lcore 0 power management 00:06:35.767 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:35.767 POWER: Initialized successfully for lcore 1 power management 00:06:35.767 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:35.767 POWER: Initialized successfully for lcore 2 power management 00:06:35.767 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:35.767 POWER: Initialized successfully for lcore 3 power management 00:06:35.767 [2024-11-19 05:08:52.097787] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.767 [2024-11-19 05:08:52.097802] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.767 [2024-11-19 05:08:52.097812] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 [2024-11-19 05:08:52.161119] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
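The NOTICE lines above show the scheduler app coming up under the dynamic scheduler: it is launched with --wait-for-rpc, framework_set_scheduler is issued while initialization is paused, and framework_start_init then brings the reactors online. A minimal sketch of the same handshake done over the default RPC socket, assuming a target started with --wait-for-rpc as in this log:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # In this test the scheduler is set before framework_start_init, while the
  # app is still waiting for RPCs on /var/tmp/spdk.sock.
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK/scripts/rpc.py framework_start_init
  # Read back the active scheduler; the NOTICEs above report the dynamic
  # scheduler's defaults (load limit 20, core limit 80, core busy 95).
  $SPDK/scripts/rpc.py framework_get_scheduler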
00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:35.767 05:08:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.767 05:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 ************************************ 00:06:35.767 START TEST scheduler_create_thread 00:06:35.767 ************************************ 00:06:35.767 05:08:52 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 2 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 3 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 4 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 5 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 6 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 7 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 8 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 9 00:06:35.767 
05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 10 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:35.767 05:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:35.767 05:08:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:35.767 05:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.767 05:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:36.703 05:08:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.703 05:08:53 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:36.703 05:08:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.703 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.082 05:08:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.082 05:08:54 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:38.082 05:08:54 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:38.082 05:08:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.082 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:06:39.020 05:08:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.020 00:06:39.020 real 0m3.382s 00:06:39.020 user 0m0.021s 00:06:39.020 sys 0m0.009s 00:06:39.020 05:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.020 05:08:55 -- common/autotest_common.sh@10 -- # set +x 00:06:39.020 ************************************ 00:06:39.020 END TEST scheduler_create_thread 00:06:39.020 ************************************ 00:06:39.279 05:08:55 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:39.279 05:08:55 -- scheduler/scheduler.sh@46 -- # killprocess 1648439 00:06:39.279 05:08:55 -- common/autotest_common.sh@936 -- # '[' -z 1648439 ']' 00:06:39.279 05:08:55 -- common/autotest_common.sh@940 -- # kill -0 1648439 00:06:39.279 05:08:55 -- common/autotest_common.sh@941 -- # uname 00:06:39.279 05:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.279 05:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648439 00:06:39.279 05:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:39.279 05:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:39.279 05:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648439' 00:06:39.279 killing process with pid 1648439 00:06:39.279 05:08:55 -- common/autotest_common.sh@955 -- # kill 1648439 00:06:39.279 05:08:55 -- common/autotest_common.sh@960 -- # wait 1648439 00:06:39.537 [2024-11-19 05:08:55.932994] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
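During init the DPDK power library moved each lcore's cpufreq governor to 'performance'; the POWER lines that follow show those governors being handed back to 'powersave' on shutdown. A quick way to inspect that state from the host, assuming a Linux box exposing the usual cpufreq sysfs nodes:

  # Current governor per CPU, one line each ('performance', 'powersave', ...).
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  # Governors the driver offers on cpu0.
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors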
00:06:39.537 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:39.538 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:39.538 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:39.538 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:39.538 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:39.538 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:39.538 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:39.538 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:39.797 00:06:39.797 real 0m4.455s 00:06:39.797 user 0m7.753s 00:06:39.797 sys 0m0.388s 00:06:39.797 05:08:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.797 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:39.797 ************************************ 00:06:39.798 END TEST event_scheduler 00:06:39.798 ************************************ 00:06:39.798 05:08:56 -- event/event.sh@51 -- # modprobe -n nbd 00:06:39.798 05:08:56 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:39.798 05:08:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.798 05:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.798 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:39.798 ************************************ 00:06:39.798 START TEST app_repeat 00:06:39.798 ************************************ 00:06:39.798 05:08:56 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:39.798 05:08:56 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.798 05:08:56 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.798 05:08:56 -- event/event.sh@13 -- # local nbd_list 00:06:39.798 05:08:56 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.798 05:08:56 -- event/event.sh@14 -- # local bdev_list 00:06:39.798 05:08:56 -- event/event.sh@15 -- # local repeat_times=4 00:06:39.798 05:08:56 -- event/event.sh@17 -- # modprobe nbd 00:06:39.798 05:08:56 -- event/event.sh@19 -- # repeat_pid=1649296 00:06:39.798 05:08:56 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.798 05:08:56 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:39.798 05:08:56 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1649296' 00:06:39.798 Process app_repeat pid: 1649296 00:06:39.798 05:08:56 -- event/event.sh@23 -- # for i in {0..2} 00:06:39.798 05:08:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:39.798 spdk_app_start Round 0 00:06:39.798 05:08:56 -- event/event.sh@25 -- # waitforlisten 1649296 /var/tmp/spdk-nbd.sock 00:06:39.798 05:08:56 -- common/autotest_common.sh@829 -- # '[' -z 1649296 ']' 00:06:39.798 05:08:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.798 05:08:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.798 05:08:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.798 05:08:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.798 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:39.798 [2024-11-19 05:08:56.241469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.798 [2024-11-19 05:08:56.241544] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649296 ] 00:06:39.798 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.798 [2024-11-19 05:08:56.310847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.798 [2024-11-19 05:08:56.348670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.798 [2024-11-19 05:08:56.348673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.736 05:08:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.736 05:08:57 -- common/autotest_common.sh@862 -- # return 0 00:06:40.736 05:08:57 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.736 Malloc0 00:06:40.736 05:08:57 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.996 Malloc1 00:06:40.996 05:08:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@12 -- # local i 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.996 05:08:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.256 /dev/nbd0 00:06:41.256 05:08:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.256 05:08:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.256 05:08:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.256 05:08:57 -- common/autotest_common.sh@867 -- # local i 00:06:41.256 05:08:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.256 05:08:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.256 05:08:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.256 05:08:57 -- common/autotest_common.sh@871 -- 
# break 00:06:41.256 05:08:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.256 05:08:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.256 05:08:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.256 1+0 records in 00:06:41.256 1+0 records out 00:06:41.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230208 s, 17.8 MB/s 00:06:41.256 05:08:57 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.256 05:08:57 -- common/autotest_common.sh@884 -- # size=4096 00:06:41.256 05:08:57 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.256 05:08:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.256 05:08:57 -- common/autotest_common.sh@887 -- # return 0 00:06:41.256 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.256 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.256 05:08:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.516 /dev/nbd1 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.516 05:08:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.516 05:08:57 -- common/autotest_common.sh@867 -- # local i 00:06:41.516 05:08:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.516 05:08:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.516 05:08:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.516 05:08:57 -- common/autotest_common.sh@871 -- # break 00:06:41.516 05:08:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.516 05:08:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.516 05:08:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.516 1+0 records in 00:06:41.516 1+0 records out 00:06:41.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253087 s, 16.2 MB/s 00:06:41.516 05:08:57 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.516 05:08:57 -- common/autotest_common.sh@884 -- # size=4096 00:06:41.516 05:08:57 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.516 05:08:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.516 05:08:57 -- common/autotest_common.sh@887 -- # return 0 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.516 05:08:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.775 05:08:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.775 { 00:06:41.775 "nbd_device": "/dev/nbd0", 00:06:41.775 "bdev_name": "Malloc0" 00:06:41.775 }, 00:06:41.775 { 00:06:41.775 "nbd_device": "/dev/nbd1", 00:06:41.775 "bdev_name": "Malloc1" 00:06:41.775 } 00:06:41.775 ]' 
00:06:41.775 05:08:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.775 { 00:06:41.775 "nbd_device": "/dev/nbd0", 00:06:41.775 "bdev_name": "Malloc0" 00:06:41.775 }, 00:06:41.776 { 00:06:41.776 "nbd_device": "/dev/nbd1", 00:06:41.776 "bdev_name": "Malloc1" 00:06:41.776 } 00:06:41.776 ]' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.776 /dev/nbd1' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.776 /dev/nbd1' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.776 256+0 records in 00:06:41.776 256+0 records out 00:06:41.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011476 s, 91.4 MB/s 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.776 256+0 records in 00:06:41.776 256+0 records out 00:06:41.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189556 s, 55.3 MB/s 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.776 256+0 records in 00:06:41.776 256+0 records out 00:06:41.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204414 s, 51.3 MB/s 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
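The dd/cmp sequence above is the write-then-verify half of nbd_rpc_data_verify: 1 MiB of random data is pushed through each exported NBD device with O_DIRECT and compared back byte-for-byte. A minimal sketch of the same check against a single device, assuming /dev/nbd0 is already backed by an SPDK bdev as in this log:

  TMP=$(mktemp)   # the log uses spdk/test/event/nbdrandtest as its scratch file
  dd if=/dev/urandom of=$TMP bs=4096 count=256            # 1 MiB of random data
  dd if=$TMP of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through the NBD device
  # cmp exits non-zero on the first mismatching byte, which fails the test.
  cmp -b -n 1M $TMP /dev/nbd0 && echo verify-ok
  rm -f $TMP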
00:06:41.776 05:08:58 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@51 -- # local i 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.776 05:08:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@41 -- # break 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.035 05:08:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@41 -- # break 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.295 05:08:58 -- bdev/nbd_common.sh@65 -- # true 00:06:42.555 05:08:58 -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.555 05:08:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.555 05:08:58 -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.555 05:08:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.555 05:08:58 -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.555 05:08:58 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.555 05:08:59 -- event/event.sh@35 -- # sleep 3 00:06:42.818 [2024-11-19 05:08:59.224635] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:42.819 [2024-11-19 05:08:59.256827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.819 [2024-11-19 05:08:59.256830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.819 [2024-11-19 05:08:59.297818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.819 [2024-11-19 05:08:59.297865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.229 05:09:02 -- event/event.sh@23 -- # for i in {0..2} 00:06:46.229 05:09:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:46.229 spdk_app_start Round 1 00:06:46.229 05:09:02 -- event/event.sh@25 -- # waitforlisten 1649296 /var/tmp/spdk-nbd.sock 00:06:46.229 05:09:02 -- common/autotest_common.sh@829 -- # '[' -z 1649296 ']' 00:06:46.229 05:09:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.229 05:09:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.229 05:09:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.229 05:09:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.229 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.229 05:09:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.229 05:09:02 -- common/autotest_common.sh@862 -- # return 0 00:06:46.229 05:09:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.229 Malloc0 00:06:46.229 05:09:02 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.229 Malloc1 00:06:46.229 05:09:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.229 05:09:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.229 /dev/nbd0 00:06:46.489 05:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.489 05:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.489 
05:09:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:46.489 05:09:02 -- common/autotest_common.sh@867 -- # local i 00:06:46.489 05:09:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.489 05:09:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.489 05:09:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:46.489 05:09:02 -- common/autotest_common.sh@871 -- # break 00:06:46.489 05:09:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.489 05:09:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.489 05:09:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.489 1+0 records in 00:06:46.489 1+0 records out 00:06:46.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263409 s, 15.5 MB/s 00:06:46.489 05:09:02 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:46.489 05:09:02 -- common/autotest_common.sh@884 -- # size=4096 00:06:46.489 05:09:02 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:46.489 05:09:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.489 05:09:02 -- common/autotest_common.sh@887 -- # return 0 00:06:46.489 05:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.489 05:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.489 05:09:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.489 /dev/nbd1 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.489 05:09:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:46.489 05:09:03 -- common/autotest_common.sh@867 -- # local i 00:06:46.489 05:09:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.489 05:09:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.489 05:09:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:46.489 05:09:03 -- common/autotest_common.sh@871 -- # break 00:06:46.489 05:09:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.489 05:09:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.489 05:09:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.489 1+0 records in 00:06:46.489 1+0 records out 00:06:46.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00012556 s, 32.6 MB/s 00:06:46.489 05:09:03 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:46.489 05:09:03 -- common/autotest_common.sh@884 -- # size=4096 00:06:46.489 05:09:03 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:46.489 05:09:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.489 05:09:03 -- common/autotest_common.sh@887 -- # return 0 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.489 05:09:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.489 05:09:03 
-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.748 { 00:06:46.748 "nbd_device": "/dev/nbd0", 00:06:46.748 "bdev_name": "Malloc0" 00:06:46.748 }, 00:06:46.748 { 00:06:46.748 "nbd_device": "/dev/nbd1", 00:06:46.748 "bdev_name": "Malloc1" 00:06:46.748 } 00:06:46.748 ]' 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.748 { 00:06:46.748 "nbd_device": "/dev/nbd0", 00:06:46.748 "bdev_name": "Malloc0" 00:06:46.748 }, 00:06:46.748 { 00:06:46.748 "nbd_device": "/dev/nbd1", 00:06:46.748 "bdev_name": "Malloc1" 00:06:46.748 } 00:06:46.748 ]' 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.748 /dev/nbd1' 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.748 /dev/nbd1' 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.748 05:09:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.749 256+0 records in 00:06:46.749 256+0 records out 00:06:46.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106303 s, 98.6 MB/s 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.749 05:09:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.009 256+0 records in 00:06:47.009 256+0 records out 00:06:47.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193381 s, 54.2 MB/s 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.009 256+0 records in 00:06:47.009 256+0 records out 00:06:47.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202979 s, 51.7 MB/s 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.009 05:09:03 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.009 05:09:03 -- bdev/nbd_common.sh@41 -- # break 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@41 -- # break 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.269 05:09:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.528 05:09:03 -- bdev/nbd_common.sh@65 -- # true 00:06:47.528 05:09:04 -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.528 05:09:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.528 05:09:04 -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.528 05:09:04 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.528 05:09:04 -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.528 05:09:04 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.787 05:09:04 -- event/event.sh@35 -- # sleep 3 00:06:48.047 [2024-11-19 05:09:04.373420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.047 [2024-11-19 05:09:04.406283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.047 [2024-11-19 05:09:04.406285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.047 [2024-11-19 05:09:04.447336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.047 [2024-11-19 05:09:04.447384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.341 05:09:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:51.341 05:09:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:51.341 spdk_app_start Round 2 00:06:51.341 05:09:07 -- event/event.sh@25 -- # waitforlisten 1649296 /var/tmp/spdk-nbd.sock 00:06:51.341 05:09:07 -- common/autotest_common.sh@829 -- # '[' -z 1649296 ']' 00:06:51.341 05:09:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.341 05:09:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.341 05:09:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.341 05:09:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.341 05:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.341 05:09:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.341 05:09:07 -- common/autotest_common.sh@862 -- # return 0 00:06:51.341 05:09:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.341 Malloc0 00:06:51.341 05:09:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.341 Malloc1 00:06:51.341 05:09:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@12 -- # local i 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.341 05:09:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.601 /dev/nbd0 00:06:51.601 05:09:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.601 05:09:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.601 05:09:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.601 05:09:07 -- common/autotest_common.sh@867 -- # local i 00:06:51.601 05:09:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.601 05:09:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.601 05:09:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.601 05:09:07 -- common/autotest_common.sh@871 -- # break 00:06:51.601 05:09:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.601 05:09:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.601 05:09:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.601 1+0 records in 00:06:51.601 1+0 records out 00:06:51.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235101 s, 17.4 MB/s 00:06:51.601 05:09:07 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.601 05:09:07 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.601 05:09:07 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.601 05:09:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.601 05:09:07 -- common/autotest_common.sh@887 -- # return 0 00:06:51.601 05:09:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.601 05:09:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.601 05:09:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.601 /dev/nbd1 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.860 05:09:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:51.860 05:09:08 -- common/autotest_common.sh@867 -- # local i 00:06:51.860 05:09:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.860 05:09:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.860 05:09:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:51.860 05:09:08 -- common/autotest_common.sh@871 -- # break 00:06:51.860 05:09:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.860 05:09:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.860 05:09:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.860 1+0 records in 00:06:51.860 1+0 records out 00:06:51.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261223 s, 15.7 MB/s 00:06:51.860 05:09:08 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.860 05:09:08 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.860 05:09:08 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.860 05:09:08 -- common/autotest_common.sh@886 -- # 
'[' 4096 '!=' 0 ']' 00:06:51.860 05:09:08 -- common/autotest_common.sh@887 -- # return 0 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.860 { 00:06:51.860 "nbd_device": "/dev/nbd0", 00:06:51.860 "bdev_name": "Malloc0" 00:06:51.860 }, 00:06:51.860 { 00:06:51.860 "nbd_device": "/dev/nbd1", 00:06:51.860 "bdev_name": "Malloc1" 00:06:51.860 } 00:06:51.860 ]' 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.860 { 00:06:51.860 "nbd_device": "/dev/nbd0", 00:06:51.860 "bdev_name": "Malloc0" 00:06:51.860 }, 00:06:51.860 { 00:06:51.860 "nbd_device": "/dev/nbd1", 00:06:51.860 "bdev_name": "Malloc1" 00:06:51.860 } 00:06:51.860 ]' 00:06:51.860 05:09:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.120 /dev/nbd1' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.120 /dev/nbd1' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.120 256+0 records in 00:06:52.120 256+0 records out 00:06:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115705 s, 90.6 MB/s 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.120 256+0 records in 00:06:52.120 256+0 records out 00:06:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019432 s, 54.0 MB/s 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.120 256+0 records in 00:06:52.120 256+0 records out 00:06:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203717 s, 51.5 MB/s 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.120 05:09:08 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@51 -- # local i 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.120 05:09:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@41 -- # break 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@41 -- # break 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.380 05:09:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.640 05:09:09 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@65 -- # true 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.640 05:09:09 -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.640 05:09:09 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.899 05:09:09 -- event/event.sh@35 -- # sleep 3 00:06:53.158 [2024-11-19 05:09:09.509982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.158 [2024-11-19 05:09:09.543255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.159 [2024-11-19 05:09:09.543257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.159 [2024-11-19 05:09:09.584402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.159 [2024-11-19 05:09:09.584449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.447 05:09:12 -- event/event.sh@38 -- # waitforlisten 1649296 /var/tmp/spdk-nbd.sock 00:06:56.447 05:09:12 -- common/autotest_common.sh@829 -- # '[' -z 1649296 ']' 00:06:56.447 05:09:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.447 05:09:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.447 05:09:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.447 05:09:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.447 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.447 05:09:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.447 05:09:12 -- common/autotest_common.sh@862 -- # return 0 00:06:56.447 05:09:12 -- event/event.sh@39 -- # killprocess 1649296 00:06:56.447 05:09:12 -- common/autotest_common.sh@936 -- # '[' -z 1649296 ']' 00:06:56.447 05:09:12 -- common/autotest_common.sh@940 -- # kill -0 1649296 00:06:56.447 05:09:12 -- common/autotest_common.sh@941 -- # uname 00:06:56.447 05:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.447 05:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1649296 00:06:56.447 05:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.447 05:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.447 05:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1649296' 00:06:56.447 killing process with pid 1649296 00:06:56.447 05:09:12 -- common/autotest_common.sh@955 -- # kill 1649296 00:06:56.447 05:09:12 -- common/autotest_common.sh@960 -- # wait 1649296 00:06:56.447 spdk_app_start is called in Round 0. 00:06:56.447 Shutdown signal received, stop current app iteration 00:06:56.447 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:56.447 spdk_app_start is called in Round 1. 
00:06:56.447 Shutdown signal received, stop current app iteration 00:06:56.447 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:56.447 spdk_app_start is called in Round 2. 00:06:56.447 Shutdown signal received, stop current app iteration 00:06:56.447 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:56.447 spdk_app_start is called in Round 3. 00:06:56.447 Shutdown signal received, stop current app iteration 00:06:56.447 05:09:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.447 05:09:12 -- event/event.sh@42 -- # return 0 00:06:56.447 00:06:56.447 real 0m16.530s 00:06:56.447 user 0m35.588s 00:06:56.447 sys 0m2.956s 00:06:56.448 05:09:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.448 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.448 ************************************ 00:06:56.448 END TEST app_repeat 00:06:56.448 ************************************ 00:06:56.448 05:09:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.448 05:09:12 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:56.448 05:09:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.448 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.448 ************************************ 00:06:56.448 START TEST cpu_locks 00:06:56.448 ************************************ 00:06:56.448 05:09:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:56.448 * Looking for test storage... 00:06:56.448 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:56.448 05:09:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:56.448 05:09:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:56.448 05:09:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:56.448 05:09:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:56.448 05:09:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:56.448 05:09:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:56.448 05:09:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:56.448 05:09:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:56.448 05:09:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.448 05:09:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:56.448 05:09:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:56.448 05:09:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:56.448 05:09:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:56.448 05:09:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:56.448 05:09:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:56.448 05:09:12 -- scripts/common.sh@344 -- # : 1 00:06:56.448 05:09:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:56.448 05:09:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.448 05:09:12 -- scripts/common.sh@364 -- # decimal 1 00:06:56.448 05:09:12 -- scripts/common.sh@352 -- # local d=1 00:06:56.448 05:09:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.448 05:09:12 -- scripts/common.sh@354 -- # echo 1 00:06:56.448 05:09:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:56.448 05:09:12 -- scripts/common.sh@365 -- # decimal 2 00:06:56.448 05:09:12 -- scripts/common.sh@352 -- # local d=2 00:06:56.448 05:09:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.448 05:09:12 -- scripts/common.sh@354 -- # echo 2 00:06:56.448 05:09:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:56.448 05:09:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:56.448 05:09:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:56.448 05:09:12 -- scripts/common.sh@367 -- # return 0 00:06:56.448 05:09:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.448 --rc genhtml_branch_coverage=1 00:06:56.448 --rc genhtml_function_coverage=1 00:06:56.448 --rc genhtml_legend=1 00:06:56.448 --rc geninfo_all_blocks=1 00:06:56.448 --rc geninfo_unexecuted_blocks=1 00:06:56.448 00:06:56.448 ' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.448 --rc genhtml_branch_coverage=1 00:06:56.448 --rc genhtml_function_coverage=1 00:06:56.448 --rc genhtml_legend=1 00:06:56.448 --rc geninfo_all_blocks=1 00:06:56.448 --rc geninfo_unexecuted_blocks=1 00:06:56.448 00:06:56.448 ' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.448 --rc genhtml_branch_coverage=1 00:06:56.448 --rc genhtml_function_coverage=1 00:06:56.448 --rc genhtml_legend=1 00:06:56.448 --rc geninfo_all_blocks=1 00:06:56.448 --rc geninfo_unexecuted_blocks=1 00:06:56.448 00:06:56.448 ' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.448 --rc genhtml_branch_coverage=1 00:06:56.448 --rc genhtml_function_coverage=1 00:06:56.448 --rc genhtml_legend=1 00:06:56.448 --rc geninfo_all_blocks=1 00:06:56.448 --rc geninfo_unexecuted_blocks=1 00:06:56.448 00:06:56.448 ' 00:06:56.448 05:09:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.448 05:09:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.448 05:09:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.448 05:09:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.448 05:09:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.448 05:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.448 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.448 ************************************ 00:06:56.448 START TEST default_locks 00:06:56.448 ************************************ 00:06:56.448 05:09:12 -- common/autotest_common.sh@1114 -- # default_locks 00:06:56.448 05:09:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1653045 00:06:56.448 05:09:12 -- event/cpu_locks.sh@47 -- # waitforlisten 1653045 00:06:56.448 05:09:12 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.448 05:09:12 -- common/autotest_common.sh@829 -- # '[' -z 1653045 ']' 00:06:56.448 05:09:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.448 05:09:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.448 05:09:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.448 05:09:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.448 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.707 [2024-11-19 05:09:13.031460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.707 [2024-11-19 05:09:13.031513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653045 ] 00:06:56.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.707 [2024-11-19 05:09:13.101826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.707 [2024-11-19 05:09:13.137264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.707 [2024-11-19 05:09:13.137384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.275 05:09:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.275 05:09:13 -- common/autotest_common.sh@862 -- # return 0 00:06:57.275 05:09:13 -- event/cpu_locks.sh@49 -- # locks_exist 1653045 00:06:57.275 05:09:13 -- event/cpu_locks.sh@22 -- # lslocks -p 1653045 00:06:57.275 05:09:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.534 lslocks: write error 00:06:57.534 05:09:14 -- event/cpu_locks.sh@50 -- # killprocess 1653045 00:06:57.535 05:09:14 -- common/autotest_common.sh@936 -- # '[' -z 1653045 ']' 00:06:57.535 05:09:14 -- common/autotest_common.sh@940 -- # kill -0 1653045 00:06:57.535 05:09:14 -- common/autotest_common.sh@941 -- # uname 00:06:57.535 05:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.535 05:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653045 00:06:57.535 05:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.535 05:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.535 05:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653045' 00:06:57.535 killing process with pid 1653045 00:06:57.535 05:09:14 -- common/autotest_common.sh@955 -- # kill 1653045 00:06:57.535 05:09:14 -- common/autotest_common.sh@960 -- # wait 1653045 00:06:58.104 05:09:14 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1653045 00:06:58.104 05:09:14 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.104 05:09:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1653045 00:06:58.104 05:09:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.104 05:09:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.104 05:09:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.104 05:09:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.104 05:09:14 -- common/autotest_common.sh@653 -- # waitforlisten 1653045 00:06:58.104 05:09:14 -- 
common/autotest_common.sh@829 -- # '[' -z 1653045 ']' 00:06:58.104 05:09:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.104 05:09:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.104 05:09:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.104 05:09:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.104 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.104 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1653045) - No such process 00:06:58.104 ERROR: process (pid: 1653045) is no longer running 00:06:58.104 05:09:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.104 05:09:14 -- common/autotest_common.sh@862 -- # return 1 00:06:58.104 05:09:14 -- common/autotest_common.sh@653 -- # es=1 00:06:58.104 05:09:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.104 05:09:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.104 05:09:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.104 05:09:14 -- event/cpu_locks.sh@54 -- # no_locks 00:06:58.104 05:09:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.104 05:09:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.104 05:09:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.104 00:06:58.104 real 0m1.407s 00:06:58.104 user 0m1.470s 00:06:58.104 sys 0m0.498s 00:06:58.104 05:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.104 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.104 ************************************ 00:06:58.104 END TEST default_locks 00:06:58.104 ************************************ 00:06:58.104 05:09:14 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:58.104 05:09:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.104 05:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.104 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.104 ************************************ 00:06:58.104 START TEST default_locks_via_rpc 00:06:58.104 ************************************ 00:06:58.104 05:09:14 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:58.104 05:09:14 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1653340 00:06:58.104 05:09:14 -- event/cpu_locks.sh@63 -- # waitforlisten 1653340 00:06:58.104 05:09:14 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.104 05:09:14 -- common/autotest_common.sh@829 -- # '[' -z 1653340 ']' 00:06:58.104 05:09:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.104 05:09:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.104 05:09:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.104 05:09:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.104 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.104 [2024-11-19 05:09:14.489308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.104 [2024-11-19 05:09:14.489368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653340 ] 00:06:58.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.104 [2024-11-19 05:09:14.558661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.104 [2024-11-19 05:09:14.591658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.104 [2024-11-19 05:09:14.591773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.042 05:09:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.042 05:09:15 -- common/autotest_common.sh@862 -- # return 0 00:06:59.042 05:09:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:59.042 05:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.042 05:09:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 05:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.042 05:09:15 -- event/cpu_locks.sh@67 -- # no_locks 00:06:59.042 05:09:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.042 05:09:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.042 05:09:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.042 05:09:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.042 05:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.042 05:09:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 05:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.042 05:09:15 -- event/cpu_locks.sh@71 -- # locks_exist 1653340 00:06:59.042 05:09:15 -- event/cpu_locks.sh@22 -- # lslocks -p 1653340 00:06:59.042 05:09:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.301 05:09:15 -- event/cpu_locks.sh@73 -- # killprocess 1653340 00:06:59.301 05:09:15 -- common/autotest_common.sh@936 -- # '[' -z 1653340 ']' 00:06:59.301 05:09:15 -- common/autotest_common.sh@940 -- # kill -0 1653340 00:06:59.301 05:09:15 -- common/autotest_common.sh@941 -- # uname 00:06:59.301 05:09:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:59.301 05:09:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653340 00:06:59.561 05:09:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:59.561 05:09:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:59.561 05:09:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653340' 00:06:59.561 killing process with pid 1653340 00:06:59.561 05:09:15 -- common/autotest_common.sh@955 -- # kill 1653340 00:06:59.561 05:09:15 -- common/autotest_common.sh@960 -- # wait 1653340 00:06:59.820 00:06:59.820 real 0m1.761s 00:06:59.820 user 0m1.871s 00:06:59.820 sys 0m0.609s 00:06:59.820 05:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.820 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:06:59.820 ************************************ 00:06:59.820 END TEST default_locks_via_rpc 00:06:59.820 ************************************ 00:06:59.820 05:09:16 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.820 05:09:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.820 05:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.820 05:09:16 -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.820 ************************************ 00:06:59.820 START TEST non_locking_app_on_locked_coremask 00:06:59.820 ************************************ 00:06:59.820 05:09:16 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:59.820 05:09:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1653640 00:06:59.820 05:09:16 -- event/cpu_locks.sh@81 -- # waitforlisten 1653640 /var/tmp/spdk.sock 00:06:59.820 05:09:16 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.820 05:09:16 -- common/autotest_common.sh@829 -- # '[' -z 1653640 ']' 00:06:59.820 05:09:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.820 05:09:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.820 05:09:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.820 05:09:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.820 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:06:59.820 [2024-11-19 05:09:16.300017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.820 [2024-11-19 05:09:16.300071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653640 ] 00:06:59.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.820 [2024-11-19 05:09:16.371226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.079 [2024-11-19 05:09:16.407204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.079 [2024-11-19 05:09:16.407328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.647 05:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.647 05:09:17 -- common/autotest_common.sh@862 -- # return 0 00:07:00.647 05:09:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1653809 00:07:00.647 05:09:17 -- event/cpu_locks.sh@85 -- # waitforlisten 1653809 /var/tmp/spdk2.sock 00:07:00.647 05:09:17 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.647 05:09:17 -- common/autotest_common.sh@829 -- # '[' -z 1653809 ']' 00:07:00.647 05:09:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.647 05:09:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.647 05:09:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.647 05:09:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.647 05:09:17 -- common/autotest_common.sh@10 -- # set +x 00:07:00.647 [2024-11-19 05:09:17.158966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:00.647 [2024-11-19 05:09:17.159019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653809 ] 00:07:00.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.906 [2024-11-19 05:09:17.255934] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:00.906 [2024-11-19 05:09:17.255970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.906 [2024-11-19 05:09:17.327696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.906 [2024-11-19 05:09:17.327835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.475 05:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.475 05:09:17 -- common/autotest_common.sh@862 -- # return 0 00:07:01.475 05:09:17 -- event/cpu_locks.sh@87 -- # locks_exist 1653640 00:07:01.475 05:09:17 -- event/cpu_locks.sh@22 -- # lslocks -p 1653640 00:07:01.475 05:09:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.853 lslocks: write error 00:07:02.853 05:09:19 -- event/cpu_locks.sh@89 -- # killprocess 1653640 00:07:02.853 05:09:19 -- common/autotest_common.sh@936 -- # '[' -z 1653640 ']' 00:07:02.853 05:09:19 -- common/autotest_common.sh@940 -- # kill -0 1653640 00:07:02.853 05:09:19 -- common/autotest_common.sh@941 -- # uname 00:07:02.853 05:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.853 05:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653640 00:07:02.853 05:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.853 05:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.853 05:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653640' 00:07:02.853 killing process with pid 1653640 00:07:02.853 05:09:19 -- common/autotest_common.sh@955 -- # kill 1653640 00:07:02.853 05:09:19 -- common/autotest_common.sh@960 -- # wait 1653640 00:07:03.422 05:09:19 -- event/cpu_locks.sh@90 -- # killprocess 1653809 00:07:03.422 05:09:19 -- common/autotest_common.sh@936 -- # '[' -z 1653809 ']' 00:07:03.422 05:09:19 -- common/autotest_common.sh@940 -- # kill -0 1653809 00:07:03.422 05:09:19 -- common/autotest_common.sh@941 -- # uname 00:07:03.422 05:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.422 05:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653809 00:07:03.681 05:09:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.681 05:09:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.681 05:09:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653809' 00:07:03.681 killing process with pid 1653809 00:07:03.681 05:09:20 -- common/autotest_common.sh@955 -- # kill 1653809 00:07:03.681 05:09:20 -- common/autotest_common.sh@960 -- # wait 1653809 00:07:03.941 00:07:03.941 real 0m4.058s 00:07:03.941 user 0m4.378s 00:07:03.941 sys 0m1.339s 00:07:03.941 05:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.941 05:09:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.941 ************************************ 00:07:03.941 END TEST non_locking_app_on_locked_coremask 00:07:03.941 ************************************ 00:07:03.941 05:09:20 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:03.941 05:09:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.941 05:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.941 05:09:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.941 ************************************ 00:07:03.941 START TEST locking_app_on_unlocked_coremask 00:07:03.941 ************************************ 00:07:03.941 05:09:20 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:07:03.941 05:09:20 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1654480 00:07:03.941 05:09:20 -- event/cpu_locks.sh@99 -- # waitforlisten 1654480 /var/tmp/spdk.sock 00:07:03.941 05:09:20 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:03.941 05:09:20 -- common/autotest_common.sh@829 -- # '[' -z 1654480 ']' 00:07:03.941 05:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.941 05:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.941 05:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.941 05:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.941 05:09:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.941 [2024-11-19 05:09:20.406934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.941 [2024-11-19 05:09:20.407008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654480 ] 00:07:03.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.941 [2024-11-19 05:09:20.478759] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:03.941 [2024-11-19 05:09:20.478788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.200 [2024-11-19 05:09:20.513093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.200 [2024-11-19 05:09:20.513236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.770 05:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.770 05:09:21 -- common/autotest_common.sh@862 -- # return 0 00:07:04.770 05:09:21 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1654502 00:07:04.770 05:09:21 -- event/cpu_locks.sh@103 -- # waitforlisten 1654502 /var/tmp/spdk2.sock 00:07:04.770 05:09:21 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.770 05:09:21 -- common/autotest_common.sh@829 -- # '[' -z 1654502 ']' 00:07:04.770 05:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.770 05:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.770 05:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:04.770 05:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.770 05:09:21 -- common/autotest_common.sh@10 -- # set +x 00:07:04.770 [2024-11-19 05:09:21.261742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.770 [2024-11-19 05:09:21.261795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654502 ] 00:07:04.770 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.029 [2024-11-19 05:09:21.355021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.029 [2024-11-19 05:09:21.427413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.029 [2024-11-19 05:09:21.427561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.598 05:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.598 05:09:22 -- common/autotest_common.sh@862 -- # return 0 00:07:05.598 05:09:22 -- event/cpu_locks.sh@105 -- # locks_exist 1654502 00:07:05.598 05:09:22 -- event/cpu_locks.sh@22 -- # lslocks -p 1654502 00:07:05.598 05:09:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.536 lslocks: write error 00:07:06.536 05:09:23 -- event/cpu_locks.sh@107 -- # killprocess 1654480 00:07:06.536 05:09:23 -- common/autotest_common.sh@936 -- # '[' -z 1654480 ']' 00:07:06.536 05:09:23 -- common/autotest_common.sh@940 -- # kill -0 1654480 00:07:06.536 05:09:23 -- common/autotest_common.sh@941 -- # uname 00:07:06.536 05:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.536 05:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1654480 00:07:06.795 05:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.795 05:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.795 05:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1654480' 00:07:06.795 killing process with pid 1654480 00:07:06.795 05:09:23 -- common/autotest_common.sh@955 -- # kill 1654480 00:07:06.795 05:09:23 -- common/autotest_common.sh@960 -- # wait 1654480 00:07:07.364 05:09:23 -- event/cpu_locks.sh@108 -- # killprocess 1654502 00:07:07.364 05:09:23 -- common/autotest_common.sh@936 -- # '[' -z 1654502 ']' 00:07:07.364 05:09:23 -- common/autotest_common.sh@940 -- # kill -0 1654502 00:07:07.364 05:09:23 -- common/autotest_common.sh@941 -- # uname 00:07:07.364 05:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.364 05:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1654502 00:07:07.364 05:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.364 05:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.364 05:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1654502' 00:07:07.364 killing process with pid 1654502 00:07:07.364 05:09:23 -- common/autotest_common.sh@955 -- # kill 1654502 00:07:07.364 05:09:23 -- common/autotest_common.sh@960 -- # wait 1654502 00:07:07.623 00:07:07.623 real 0m3.703s 00:07:07.623 user 0m3.982s 00:07:07.623 sys 0m1.277s 00:07:07.623 05:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.623 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 END TEST locking_app_on_unlocked_coremask 
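The pass/fail results in these cpu_locks subtests all hinge on one primitive visible in the trace: a target started with a CPU core mask (-m 0x1) takes a POSIX file lock per claimed core, and the harness asserts the lock's presence or absence with lslocks. A minimal standalone sketch of that check, assuming only commands already shown above (the $pid variable and the sleep-based settle time are illustrative simplifications; the real harness instead polls the RPC socket via waitforlisten):

# start the SPDK target pinned to core 0 (binary path copied from the trace above)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 2   # crude wait; the harness waits for /var/tmp/spdk.sock to appear instead
# a held core lock shows up in lslocks output as a path matching spdk_cpu_lock
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds its core mask lock"
fi
kill "$pid"

As the surrounding subtests demonstrate, the lock can be suppressed at startup with --disable-cpumask-locks (used for the second instance on -r /var/tmp/spdk2.sock) or toggled at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs issued through rpc.py.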
00:07:07.623 ************************************ 00:07:07.623 05:09:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:07.623 05:09:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.623 05:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.623 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 START TEST locking_app_on_locked_coremask 00:07:07.623 ************************************ 00:07:07.623 05:09:24 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:07.623 05:09:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1655074 00:07:07.623 05:09:24 -- event/cpu_locks.sh@116 -- # waitforlisten 1655074 /var/tmp/spdk.sock 00:07:07.623 05:09:24 -- common/autotest_common.sh@829 -- # '[' -z 1655074 ']' 00:07:07.623 05:09:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.623 05:09:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.623 05:09:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.623 05:09:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.623 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 05:09:24 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.623 [2024-11-19 05:09:24.151137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.623 [2024-11-19 05:09:24.151189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655074 ] 00:07:07.623 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.882 [2024-11-19 05:09:24.220342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.882 [2024-11-19 05:09:24.256724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.882 [2024-11-19 05:09:24.256844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.451 05:09:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.451 05:09:24 -- common/autotest_common.sh@862 -- # return 0 00:07:08.451 05:09:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1655326 00:07:08.451 05:09:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1655326 /var/tmp/spdk2.sock 00:07:08.451 05:09:24 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.451 05:09:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:08.451 05:09:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1655326 /var/tmp/spdk2.sock 00:07:08.451 05:09:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:08.451 05:09:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.451 05:09:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:08.451 05:09:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.451 05:09:24 -- common/autotest_common.sh@653 -- # waitforlisten 1655326 /var/tmp/spdk2.sock 00:07:08.451 05:09:24 -- common/autotest_common.sh@829 -- # '[' 
-z 1655326 ']' 00:07:08.451 05:09:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.451 05:09:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.451 05:09:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.451 05:09:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.451 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.451 [2024-11-19 05:09:25.002024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.451 [2024-11-19 05:09:25.002078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655326 ] 00:07:08.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.710 [2024-11-19 05:09:25.096855] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1655074 has claimed it. 00:07:08.710 [2024-11-19 05:09:25.096893] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.278 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1655326) - No such process 00:07:09.278 ERROR: process (pid: 1655326) is no longer running 00:07:09.278 05:09:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.278 05:09:25 -- common/autotest_common.sh@862 -- # return 1 00:07:09.278 05:09:25 -- common/autotest_common.sh@653 -- # es=1 00:07:09.278 05:09:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.278 05:09:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.278 05:09:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.278 05:09:25 -- event/cpu_locks.sh@122 -- # locks_exist 1655074 00:07:09.278 05:09:25 -- event/cpu_locks.sh@22 -- # lslocks -p 1655074 00:07:09.278 05:09:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.538 lslocks: write error 00:07:09.538 05:09:26 -- event/cpu_locks.sh@124 -- # killprocess 1655074 00:07:09.538 05:09:26 -- common/autotest_common.sh@936 -- # '[' -z 1655074 ']' 00:07:09.538 05:09:26 -- common/autotest_common.sh@940 -- # kill -0 1655074 00:07:09.538 05:09:26 -- common/autotest_common.sh@941 -- # uname 00:07:09.538 05:09:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.538 05:09:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1655074 00:07:09.538 05:09:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.538 05:09:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.538 05:09:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1655074' 00:07:09.538 killing process with pid 1655074 00:07:09.538 05:09:26 -- common/autotest_common.sh@955 -- # kill 1655074 00:07:09.538 05:09:26 -- common/autotest_common.sh@960 -- # wait 1655074 00:07:10.106 00:07:10.106 real 0m2.286s 00:07:10.106 user 0m2.532s 00:07:10.106 sys 0m0.662s 00:07:10.106 05:09:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.106 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.106 ************************************ 00:07:10.106 END TEST locking_app_on_locked_coremask 00:07:10.106 ************************************ 00:07:10.106 05:09:26 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:10.106 05:09:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.106 05:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.106 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.106 ************************************ 00:07:10.106 START TEST locking_overlapped_coremask 00:07:10.106 ************************************ 00:07:10.106 05:09:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:10.106 05:09:26 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1655590 00:07:10.106 05:09:26 -- event/cpu_locks.sh@133 -- # waitforlisten 1655590 /var/tmp/spdk.sock 00:07:10.106 05:09:26 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:10.106 05:09:26 -- common/autotest_common.sh@829 -- # '[' -z 1655590 ']' 00:07:10.106 05:09:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.107 05:09:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.107 05:09:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.107 05:09:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.107 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.107 [2024-11-19 05:09:26.486951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.107 [2024-11-19 05:09:26.487006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655590 ] 00:07:10.107 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.107 [2024-11-19 05:09:26.556092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.107 [2024-11-19 05:09:26.594438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.107 [2024-11-19 05:09:26.594591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.107 [2024-11-19 05:09:26.594709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.107 [2024-11-19 05:09:26.594711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.044 05:09:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.044 05:09:27 -- common/autotest_common.sh@862 -- # return 0 00:07:11.044 05:09:27 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1655655 00:07:11.044 05:09:27 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1655655 /var/tmp/spdk2.sock 00:07:11.044 05:09:27 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:11.044 05:09:27 -- common/autotest_common.sh@650 -- # local es=0 00:07:11.044 05:09:27 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1655655 /var/tmp/spdk2.sock 00:07:11.044 05:09:27 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.044 05:09:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.044 05:09:27 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.044 05:09:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.044 05:09:27 -- 
common/autotest_common.sh@653 -- # waitforlisten 1655655 /var/tmp/spdk2.sock 00:07:11.044 05:09:27 -- common/autotest_common.sh@829 -- # '[' -z 1655655 ']' 00:07:11.044 05:09:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.044 05:09:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.044 05:09:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.044 05:09:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.044 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:11.044 [2024-11-19 05:09:27.352491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.044 [2024-11-19 05:09:27.352548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655655 ] 00:07:11.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.044 [2024-11-19 05:09:27.450088] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1655590 has claimed it. 00:07:11.044 [2024-11-19 05:09:27.450125] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.613 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1655655) - No such process 00:07:11.613 ERROR: process (pid: 1655655) is no longer running 00:07:11.614 05:09:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.614 05:09:27 -- common/autotest_common.sh@862 -- # return 1 00:07:11.614 05:09:27 -- common/autotest_common.sh@653 -- # es=1 00:07:11.614 05:09:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.614 05:09:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.614 05:09:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.614 05:09:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:11.614 05:09:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.614 05:09:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.614 05:09:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.614 05:09:27 -- event/cpu_locks.sh@141 -- # killprocess 1655590 00:07:11.614 05:09:27 -- common/autotest_common.sh@936 -- # '[' -z 1655590 ']' 00:07:11.614 05:09:27 -- common/autotest_common.sh@940 -- # kill -0 1655590 00:07:11.614 05:09:27 -- common/autotest_common.sh@941 -- # uname 00:07:11.614 05:09:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.614 05:09:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1655590 00:07:11.614 05:09:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.614 05:09:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.614 05:09:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1655590' 00:07:11.614 killing process with pid 1655590 00:07:11.614 05:09:28 -- common/autotest_common.sh@955 -- # kill 1655590 00:07:11.614 05:09:28 -- 
common/autotest_common.sh@960 -- # wait 1655590 00:07:11.873 00:07:11.873 real 0m1.913s 00:07:11.873 user 0m5.486s 00:07:11.873 sys 0m0.442s 00:07:11.873 05:09:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.873 05:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 ************************************ 00:07:11.873 END TEST locking_overlapped_coremask 00:07:11.873 ************************************ 00:07:11.873 05:09:28 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:11.873 05:09:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:11.873 05:09:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.873 05:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 ************************************ 00:07:11.873 START TEST locking_overlapped_coremask_via_rpc 00:07:11.873 ************************************ 00:07:11.873 05:09:28 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:11.873 05:09:28 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1655947 00:07:11.873 05:09:28 -- event/cpu_locks.sh@149 -- # waitforlisten 1655947 /var/tmp/spdk.sock 00:07:11.874 05:09:28 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:11.874 05:09:28 -- common/autotest_common.sh@829 -- # '[' -z 1655947 ']' 00:07:11.874 05:09:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.874 05:09:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.874 05:09:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.874 05:09:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.874 05:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:12.133 [2024-11-19 05:09:28.455193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.133 [2024-11-19 05:09:28.455248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655947 ] 00:07:12.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.133 [2024-11-19 05:09:28.524821] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:12.133 [2024-11-19 05:09:28.524853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.133 [2024-11-19 05:09:28.558184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.133 [2024-11-19 05:09:28.558399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.133 [2024-11-19 05:09:28.558496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.133 [2024-11-19 05:09:28.558498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.071 05:09:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.071 05:09:29 -- common/autotest_common.sh@862 -- # return 0 00:07:13.071 05:09:29 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1656082 00:07:13.071 05:09:29 -- event/cpu_locks.sh@153 -- # waitforlisten 1656082 /var/tmp/spdk2.sock 00:07:13.071 05:09:29 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:13.072 05:09:29 -- common/autotest_common.sh@829 -- # '[' -z 1656082 ']' 00:07:13.072 05:09:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.072 05:09:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.072 05:09:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.072 05:09:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.072 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.072 [2024-11-19 05:09:29.366949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.072 [2024-11-19 05:09:29.367009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656082 ] 00:07:13.072 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.072 [2024-11-19 05:09:29.468955] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.072 [2024-11-19 05:09:29.468988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.072 [2024-11-19 05:09:29.542441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.072 [2024-11-19 05:09:29.542626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.072 [2024-11-19 05:09:29.542762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.072 [2024-11-19 05:09:29.542763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.641 05:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.641 05:09:30 -- common/autotest_common.sh@862 -- # return 0 00:07:13.641 05:09:30 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.641 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.641 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.901 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.901 05:09:30 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.901 05:09:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:13.901 05:09:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.901 05:09:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:13.901 05:09:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.901 05:09:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:13.901 05:09:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.901 05:09:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.901 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.901 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.901 [2024-11-19 05:09:30.220595] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1655947 has claimed it. 00:07:13.901 request: 00:07:13.901 { 00:07:13.901 "method": "framework_enable_cpumask_locks", 00:07:13.901 "req_id": 1 00:07:13.901 } 00:07:13.901 Got JSON-RPC error response 00:07:13.901 response: 00:07:13.901 { 00:07:13.901 "code": -32603, 00:07:13.901 "message": "Failed to claim CPU core: 2" 00:07:13.901 } 00:07:13.901 05:09:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:13.901 05:09:30 -- common/autotest_common.sh@653 -- # es=1 00:07:13.901 05:09:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.901 05:09:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.901 05:09:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.901 05:09:30 -- event/cpu_locks.sh@158 -- # waitforlisten 1655947 /var/tmp/spdk.sock 00:07:13.901 05:09:30 -- common/autotest_common.sh@829 -- # '[' -z 1655947 ']' 00:07:13.901 05:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.901 05:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.901 05:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
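The -32603 error above is the outcome this test wants: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4 (the second target's reactors start on cores 2, 3 and 4), so core 2 is contested. Both targets launch with --disable-cpumask-locks; the first then claims its cores over RPC, which is why the second target's attempt fails on core 2. A hedged sketch of the two calls, using the rpc.py helper that ships with SPDK and the socket paths from the log:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target locks cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed

After the first call succeeds, the check_remaining_locks step below expects exactly /var/tmp/spdk_cpu_lock_000 through _002 to exist, one lock file per core in mask 0x7.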
00:07:13.901 05:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.901 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.901 05:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.901 05:09:30 -- common/autotest_common.sh@862 -- # return 0 00:07:13.901 05:09:30 -- event/cpu_locks.sh@159 -- # waitforlisten 1656082 /var/tmp/spdk2.sock 00:07:13.901 05:09:30 -- common/autotest_common.sh@829 -- # '[' -z 1656082 ']' 00:07:13.901 05:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.901 05:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.901 05:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.901 05:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.901 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.161 05:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.161 05:09:30 -- common/autotest_common.sh@862 -- # return 0 00:07:14.161 05:09:30 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.161 05:09:30 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.161 05:09:30 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.161 05:09:30 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.161 00:07:14.161 real 0m2.199s 00:07:14.161 user 0m0.937s 00:07:14.161 sys 0m0.190s 00:07:14.161 05:09:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.161 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.161 ************************************ 00:07:14.161 END TEST locking_overlapped_coremask_via_rpc 00:07:14.161 ************************************ 00:07:14.161 05:09:30 -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.161 05:09:30 -- event/cpu_locks.sh@15 -- # [[ -z 1655947 ]] 00:07:14.161 05:09:30 -- event/cpu_locks.sh@15 -- # killprocess 1655947 00:07:14.161 05:09:30 -- common/autotest_common.sh@936 -- # '[' -z 1655947 ']' 00:07:14.161 05:09:30 -- common/autotest_common.sh@940 -- # kill -0 1655947 00:07:14.161 05:09:30 -- common/autotest_common.sh@941 -- # uname 00:07:14.161 05:09:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.161 05:09:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1655947 00:07:14.161 05:09:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.161 05:09:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.161 05:09:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1655947' 00:07:14.161 killing process with pid 1655947 00:07:14.161 05:09:30 -- common/autotest_common.sh@955 -- # kill 1655947 00:07:14.161 05:09:30 -- common/autotest_common.sh@960 -- # wait 1655947 00:07:14.730 05:09:31 -- event/cpu_locks.sh@16 -- # [[ -z 1656082 ]] 00:07:14.730 05:09:31 -- event/cpu_locks.sh@16 -- # killprocess 1656082 00:07:14.730 05:09:31 -- common/autotest_common.sh@936 -- # '[' -z 1656082 ']' 00:07:14.730 05:09:31 -- common/autotest_common.sh@940 -- # kill -0 1656082 00:07:14.730 05:09:31 -- common/autotest_common.sh@941 -- # uname 
00:07:14.730 05:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.730 05:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1656082 00:07:14.730 05:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:14.730 05:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:14.730 05:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1656082' 00:07:14.730 killing process with pid 1656082 00:07:14.730 05:09:31 -- common/autotest_common.sh@955 -- # kill 1656082 00:07:14.730 05:09:31 -- common/autotest_common.sh@960 -- # wait 1656082 00:07:14.990 05:09:31 -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.990 05:09:31 -- event/cpu_locks.sh@1 -- # cleanup 00:07:14.990 05:09:31 -- event/cpu_locks.sh@15 -- # [[ -z 1655947 ]] 00:07:14.990 05:09:31 -- event/cpu_locks.sh@15 -- # killprocess 1655947 00:07:14.990 05:09:31 -- common/autotest_common.sh@936 -- # '[' -z 1655947 ']' 00:07:14.990 05:09:31 -- common/autotest_common.sh@940 -- # kill -0 1655947 00:07:14.990 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1655947) - No such process 00:07:14.990 05:09:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1655947 is not found' 00:07:14.990 Process with pid 1655947 is not found 00:07:14.990 05:09:31 -- event/cpu_locks.sh@16 -- # [[ -z 1656082 ]] 00:07:14.990 05:09:31 -- event/cpu_locks.sh@16 -- # killprocess 1656082 00:07:14.990 05:09:31 -- common/autotest_common.sh@936 -- # '[' -z 1656082 ']' 00:07:14.990 05:09:31 -- common/autotest_common.sh@940 -- # kill -0 1656082 00:07:14.990 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1656082) - No such process 00:07:14.990 05:09:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1656082 is not found' 00:07:14.990 Process with pid 1656082 is not found 00:07:14.990 05:09:31 -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.990 00:07:14.990 real 0m18.609s 00:07:14.990 user 0m31.837s 00:07:14.990 sys 0m5.976s 00:07:14.990 05:09:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.990 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:14.990 ************************************ 00:07:14.990 END TEST cpu_locks 00:07:14.990 ************************************ 00:07:14.990 00:07:14.990 real 0m43.662s 00:07:14.990 user 1m21.680s 00:07:14.990 sys 0m9.941s 00:07:14.990 05:09:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.990 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:14.990 ************************************ 00:07:14.990 END TEST event 00:07:14.990 ************************************ 00:07:14.990 05:09:31 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:14.990 05:09:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.990 05:09:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.990 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:14.990 ************************************ 00:07:14.990 START TEST thread 00:07:14.990 ************************************ 00:07:14.990 05:09:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:15.249 * Looking for test storage... 
00:07:15.250 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:15.250 05:09:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:15.250 05:09:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:15.250 05:09:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:15.250 05:09:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:15.250 05:09:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:15.250 05:09:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:15.250 05:09:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:15.250 05:09:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:15.250 05:09:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:15.250 05:09:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.250 05:09:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:15.250 05:09:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:15.250 05:09:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:15.250 05:09:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:15.250 05:09:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:15.250 05:09:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:15.250 05:09:31 -- scripts/common.sh@344 -- # : 1 00:07:15.250 05:09:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:15.250 05:09:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.250 05:09:31 -- scripts/common.sh@364 -- # decimal 1 00:07:15.250 05:09:31 -- scripts/common.sh@352 -- # local d=1 00:07:15.250 05:09:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.250 05:09:31 -- scripts/common.sh@354 -- # echo 1 00:07:15.250 05:09:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:15.250 05:09:31 -- scripts/common.sh@365 -- # decimal 2 00:07:15.250 05:09:31 -- scripts/common.sh@352 -- # local d=2 00:07:15.250 05:09:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.250 05:09:31 -- scripts/common.sh@354 -- # echo 2 00:07:15.250 05:09:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:15.250 05:09:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:15.250 05:09:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:15.250 05:09:31 -- scripts/common.sh@367 -- # return 0 00:07:15.250 05:09:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.250 05:09:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.250 --rc genhtml_branch_coverage=1 00:07:15.250 --rc genhtml_function_coverage=1 00:07:15.250 --rc genhtml_legend=1 00:07:15.250 --rc geninfo_all_blocks=1 00:07:15.250 --rc geninfo_unexecuted_blocks=1 00:07:15.250 00:07:15.250 ' 00:07:15.250 05:09:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.250 --rc genhtml_branch_coverage=1 00:07:15.250 --rc genhtml_function_coverage=1 00:07:15.250 --rc genhtml_legend=1 00:07:15.250 --rc geninfo_all_blocks=1 00:07:15.250 --rc geninfo_unexecuted_blocks=1 00:07:15.250 00:07:15.250 ' 00:07:15.250 05:09:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.250 --rc genhtml_branch_coverage=1 00:07:15.250 --rc genhtml_function_coverage=1 00:07:15.250 --rc genhtml_legend=1 00:07:15.250 --rc geninfo_all_blocks=1 00:07:15.250 --rc geninfo_unexecuted_blocks=1 00:07:15.250 00:07:15.250 ' 
00:07:15.250 05:09:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.250 --rc genhtml_branch_coverage=1 00:07:15.250 --rc genhtml_function_coverage=1 00:07:15.250 --rc genhtml_legend=1 00:07:15.250 --rc geninfo_all_blocks=1 00:07:15.250 --rc geninfo_unexecuted_blocks=1 00:07:15.250 00:07:15.250 ' 00:07:15.250 05:09:31 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.250 05:09:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:15.250 05:09:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.250 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.250 ************************************ 00:07:15.250 START TEST thread_poller_perf 00:07:15.250 ************************************ 00:07:15.250 05:09:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.250 [2024-11-19 05:09:31.702750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.250 [2024-11-19 05:09:31.702820] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656595 ] 00:07:15.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.250 [2024-11-19 05:09:31.772733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.250 [2024-11-19 05:09:31.809209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.250 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:16.629 [2024-11-19T04:09:33.187Z] ====================================== 00:07:16.629 [2024-11-19T04:09:33.187Z] busy:2506273176 (cyc) 00:07:16.629 [2024-11-19T04:09:33.187Z] total_run_count: 408000 00:07:16.629 [2024-11-19T04:09:33.187Z] tsc_hz: 2500000000 (cyc) 00:07:16.629 [2024-11-19T04:09:33.187Z] ====================================== 00:07:16.629 [2024-11-19T04:09:33.187Z] poller_cost: 6142 (cyc), 2456 (nsec) 00:07:16.629 00:07:16.629 real 0m1.186s 00:07:16.629 user 0m1.101s 00:07:16.629 sys 0m0.080s 00:07:16.629 05:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.629 05:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.629 ************************************ 00:07:16.629 END TEST thread_poller_perf 00:07:16.629 ************************************ 00:07:16.629 05:09:32 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.629 05:09:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:16.629 05:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.629 05:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.629 ************************************ 00:07:16.629 START TEST thread_poller_perf 00:07:16.629 ************************************ 00:07:16.629 05:09:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.629 [2024-11-19 05:09:32.943284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
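poller_cost in the table above is simply the busy cycle count divided by the run count, converted to nanoseconds via the reported tsc_hz. For the 1-microsecond-period run just finished:

  echo $(( 2506273176 / 408000 ))              # 6142 cycles per poller invocation
  echo $(( 6142 * 1000000000 / 2500000000 ))   # 2456 nsec at the 2.5 GHz TSC

The same arithmetic applies to the 0-microsecond-period run that starts here.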
00:07:16.629 [2024-11-19 05:09:32.943376] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656879 ] 00:07:16.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.629 [2024-11-19 05:09:33.016833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.629 [2024-11-19 05:09:33.052315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.629 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:17.566 [2024-11-19T04:09:34.124Z] ====================================== 00:07:17.566 [2024-11-19T04:09:34.124Z] busy:2502419890 (cyc) 00:07:17.566 [2024-11-19T04:09:34.124Z] total_run_count: 5629000 00:07:17.566 [2024-11-19T04:09:34.124Z] tsc_hz: 2500000000 (cyc) 00:07:17.566 [2024-11-19T04:09:34.124Z] ====================================== 00:07:17.566 [2024-11-19T04:09:34.124Z] poller_cost: 444 (cyc), 177 (nsec) 00:07:17.566 00:07:17.566 real 0m1.188s 00:07:17.566 user 0m1.090s 00:07:17.566 sys 0m0.094s 00:07:17.566 05:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.566 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.566 ************************************ 00:07:17.566 END TEST thread_poller_perf 00:07:17.566 ************************************ 00:07:17.826 05:09:34 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:17.826 00:07:17.826 real 0m2.653s 00:07:17.826 user 0m2.328s 00:07:17.826 sys 0m0.352s 00:07:17.826 05:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.826 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.826 ************************************ 00:07:17.826 END TEST thread 00:07:17.826 ************************************ 00:07:17.826 05:09:34 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:17.826 05:09:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.826 05:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.826 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.826 ************************************ 00:07:17.826 START TEST accel 00:07:17.826 ************************************ 00:07:17.826 05:09:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:17.826 * Looking for test storage... 
00:07:17.826 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:17.826 05:09:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.826 05:09:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.826 05:09:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.826 05:09:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.826 05:09:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.826 05:09:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.826 05:09:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.826 05:09:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.826 05:09:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.826 05:09:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.826 05:09:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.826 05:09:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.826 05:09:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.827 05:09:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.827 05:09:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.827 05:09:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.827 05:09:34 -- scripts/common.sh@344 -- # : 1 00:07:17.827 05:09:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.827 05:09:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.827 05:09:34 -- scripts/common.sh@364 -- # decimal 1 00:07:17.827 05:09:34 -- scripts/common.sh@352 -- # local d=1 00:07:17.827 05:09:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.827 05:09:34 -- scripts/common.sh@354 -- # echo 1 00:07:17.827 05:09:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.827 05:09:34 -- scripts/common.sh@365 -- # decimal 2 00:07:17.827 05:09:34 -- scripts/common.sh@352 -- # local d=2 00:07:17.827 05:09:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.827 05:09:34 -- scripts/common.sh@354 -- # echo 2 00:07:17.827 05:09:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.827 05:09:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.827 05:09:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.827 05:09:34 -- scripts/common.sh@367 -- # return 0 00:07:17.827 05:09:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.827 05:09:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.827 --rc genhtml_branch_coverage=1 00:07:17.827 --rc genhtml_function_coverage=1 00:07:17.827 --rc genhtml_legend=1 00:07:17.827 --rc geninfo_all_blocks=1 00:07:17.827 --rc geninfo_unexecuted_blocks=1 00:07:17.827 00:07:17.827 ' 00:07:17.827 05:09:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.827 --rc genhtml_branch_coverage=1 00:07:17.827 --rc genhtml_function_coverage=1 00:07:17.827 --rc genhtml_legend=1 00:07:17.827 --rc geninfo_all_blocks=1 00:07:17.827 --rc geninfo_unexecuted_blocks=1 00:07:17.827 00:07:17.827 ' 00:07:17.827 05:09:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.827 --rc genhtml_branch_coverage=1 00:07:17.827 --rc genhtml_function_coverage=1 00:07:17.827 --rc genhtml_legend=1 00:07:17.827 --rc geninfo_all_blocks=1 00:07:17.827 --rc geninfo_unexecuted_blocks=1 00:07:17.827 00:07:17.827 ' 
00:07:17.827 05:09:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.827 --rc genhtml_branch_coverage=1 00:07:17.827 --rc genhtml_function_coverage=1 00:07:17.827 --rc genhtml_legend=1 00:07:17.827 --rc geninfo_all_blocks=1 00:07:17.827 --rc geninfo_unexecuted_blocks=1 00:07:17.827 00:07:17.827 ' 00:07:17.827 05:09:34 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:17.827 05:09:34 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:17.827 05:09:34 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.827 05:09:34 -- accel/accel.sh@59 -- # spdk_tgt_pid=1657217 00:07:17.827 05:09:34 -- accel/accel.sh@60 -- # waitforlisten 1657217 00:07:17.827 05:09:34 -- common/autotest_common.sh@829 -- # '[' -z 1657217 ']' 00:07:17.827 05:09:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.827 05:09:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.827 05:09:34 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:17.827 05:09:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.827 05:09:34 -- accel/accel.sh@58 -- # build_accel_config 00:07:17.827 05:09:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.827 05:09:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.827 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.827 05:09:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.827 05:09:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.827 05:09:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.827 05:09:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.827 05:09:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.827 05:09:34 -- accel/accel.sh@42 -- # jq -r . 00:07:18.087 [2024-11-19 05:09:34.418079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.087 [2024-11-19 05:09:34.418132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657217 ] 00:07:18.087 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.087 [2024-11-19 05:09:34.486322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.087 [2024-11-19 05:09:34.522860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.087 [2024-11-19 05:09:34.522985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.795 05:09:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.795 05:09:35 -- common/autotest_common.sh@862 -- # return 0 00:07:18.795 05:09:35 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:18.795 05:09:35 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:18.795 05:09:35 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:18.795 05:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.795 05:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.795 05:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 
05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # IFS== 00:07:18.795 05:09:35 -- accel/accel.sh@64 -- # read -r opc module 00:07:18.795 05:09:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:18.795 05:09:35 -- accel/accel.sh@67 -- # killprocess 1657217 00:07:18.795 05:09:35 -- common/autotest_common.sh@936 -- # '[' -z 1657217 ']' 00:07:18.795 05:09:35 -- common/autotest_common.sh@940 -- # kill -0 1657217 00:07:18.795 05:09:35 -- common/autotest_common.sh@941 -- # uname 00:07:18.795 05:09:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.795 05:09:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1657217 00:07:18.795 05:09:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.795 05:09:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.795 05:09:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1657217' 00:07:18.795 killing process with pid 1657217 00:07:18.795 05:09:35 -- common/autotest_common.sh@955 -- # kill 1657217 00:07:18.795 05:09:35 -- common/autotest_common.sh@960 -- # wait 1657217 00:07:19.365 05:09:35 -- accel/accel.sh@68 -- # trap - ERR 00:07:19.365 05:09:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:19.365 05:09:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.365 05:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.365 05:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 05:09:35 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:19.365 05:09:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:19.365 05:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.365 05:09:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.365 05:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.365 05:09:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.365 05:09:35 -- accel/accel.sh@42 -- # jq -r . 
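The long IFS== loop above is get_expected_opcs parsing the accel_get_opc_assignments RPC: every opcode (copy, fill, crc32c, compare, compress, dualcast, xor, and so on) maps to the software module, as expected for a build with no hardware accel drivers loaded. Reproduced by hand, the query would look roughly like this sketch, using the same jq filter the script uses:

  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints lines such as: copy=software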
00:07:19.365 05:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.365 05:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 05:09:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:19.365 05:09:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:19.365 05:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.365 05:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 ************************************ 00:07:19.365 START TEST accel_missing_filename 00:07:19.365 ************************************ 00:07:19.365 05:09:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:19.365 05:09:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:19.365 05:09:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:19.365 05:09:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:19.365 05:09:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.365 05:09:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:19.365 05:09:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.365 05:09:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:19.365 05:09:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:19.365 05:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.365 05:09:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.365 05:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.365 05:09:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.365 05:09:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.365 05:09:35 -- accel/accel.sh@42 -- # jq -r . 00:07:19.365 [2024-11-19 05:09:35.719262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.365 [2024-11-19 05:09:35.719323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657496 ] 00:07:19.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.365 [2024-11-19 05:09:35.785753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.365 [2024-11-19 05:09:35.821494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.365 [2024-11-19 05:09:35.861634] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.365 [2024-11-19 05:09:35.921249] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:19.625 A filename is required. 
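"A filename is required." is the negative result this test is after: compress workloads need an input file via -l, which was deliberately omitted here. The compress_verify test that follows does pass -l but adds -y, which compress also rejects. A working compress invocation would look roughly like the following, with paths as used by this job and -y left off:

  ./build/examples/accel_perf -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib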
00:07:19.625 05:09:35 -- common/autotest_common.sh@653 -- # es=234 00:07:19.625 05:09:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.625 05:09:35 -- common/autotest_common.sh@662 -- # es=106 00:07:19.625 05:09:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:19.625 05:09:35 -- common/autotest_common.sh@670 -- # es=1 00:07:19.625 05:09:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.625 00:07:19.625 real 0m0.279s 00:07:19.625 user 0m0.197s 00:07:19.625 sys 0m0.120s 00:07:19.625 05:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.625 05:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.625 ************************************ 00:07:19.625 END TEST accel_missing_filename 00:07:19.625 ************************************ 00:07:19.625 05:09:36 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.625 05:09:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:19.625 05:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.625 05:09:36 -- common/autotest_common.sh@10 -- # set +x 00:07:19.625 ************************************ 00:07:19.625 START TEST accel_compress_verify 00:07:19.625 ************************************ 00:07:19.625 05:09:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.625 05:09:36 -- common/autotest_common.sh@650 -- # local es=0 00:07:19.625 05:09:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.625 05:09:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:19.625 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.625 05:09:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:19.625 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.625 05:09:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.625 05:09:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.625 05:09:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.625 05:09:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.625 05:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.625 05:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.625 05:09:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.625 05:09:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.625 05:09:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.625 05:09:36 -- accel/accel.sh@42 -- # jq -r . 00:07:19.625 [2024-11-19 05:09:36.054681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.625 [2024-11-19 05:09:36.054748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657546 ]
00:07:19.625 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.625 [2024-11-19 05:09:36.124128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.625 [2024-11-19 05:09:36.158914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.885 [2024-11-19 05:09:36.199684] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:19.885 [2024-11-19 05:09:36.259355] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:07:19.885
00:07:19.885 Compression does not support the verify option, aborting.
00:07:19.885 05:09:36 -- common/autotest_common.sh@653 -- # es=161
00:07:19.885 05:09:36 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:19.885 05:09:36 -- common/autotest_common.sh@662 -- # es=33
00:07:19.885 05:09:36 -- common/autotest_common.sh@663 -- # case "$es" in
00:07:19.885 05:09:36 -- common/autotest_common.sh@670 -- # es=1
00:07:19.885 05:09:36 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:19.885
00:07:19.885 real 0m0.294s
00:07:19.885 user 0m0.195s
00:07:19.885 sys 0m0.138s
00:07:19.885 05:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:19.885 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:19.885 ************************************
00:07:19.885 END TEST accel_compress_verify
00:07:19.885 ************************************
00:07:19.885 05:09:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:19.885 05:09:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:19.885 05:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:19.885 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:19.885 ************************************
00:07:19.885 START TEST accel_wrong_workload
00:07:19.885 ************************************
00:07:19.885 05:09:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:07:19.885 05:09:36 -- common/autotest_common.sh@650 -- # local es=0
00:07:19.885 05:09:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:19.885 05:09:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:19.885 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.885 05:09:36 -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:19.885 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.885 05:09:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:07:19.885 05:09:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:19.885 05:09:36 -- accel/accel.sh@12 -- # build_accel_config
00:07:19.885 05:09:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:19.885 05:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.885 05:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.885 05:09:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:19.885 05:09:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:19.885 05:09:36 -- accel/accel.sh@41 -- # local IFS=,
00:07:19.885 05:09:36 -- accel/accel.sh@42 -- # jq -r .
00:07:19.885 Unsupported workload type: foobar
00:07:19.885 [2024-11-19 05:09:36.389464] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:19.885 accel_perf options:
00:07:19.885 [-h help message]
00:07:19.885 [-q queue depth per core]
00:07:19.885 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:19.885 [-T number of threads per core
00:07:19.885 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:19.885 [-t time in seconds]
00:07:19.885 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:19.885 [ dif_verify, , dif_generate, dif_generate_copy
00:07:19.886 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:19.886 [-l for compress/decompress workloads, name of uncompressed input file
00:07:19.886 [-S for crc32c workload, use this seed value (default 0)
00:07:19.886 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:19.886 [-f for fill workload, use this BYTE value (default 255)
00:07:19.886 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:19.886 [-y verify result if this switch is on]
00:07:19.886 [-a tasks to allocate per core (default: same value as -q)]
00:07:19.886 Can be used to spread operations across a wider range of memory.
00:07:19.886 05:09:36 -- common/autotest_common.sh@653 -- # es=1
00:07:19.886 05:09:36 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:19.886 05:09:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:19.886 05:09:36 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:19.886
00:07:19.886 real 0m0.032s
00:07:19.886 user 0m0.016s
00:07:19.886 sys 0m0.016s
00:07:19.886 05:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:19.886 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:19.886 ************************************
00:07:19.886 END TEST accel_wrong_workload
00:07:19.886 ************************************
00:07:19.886 Error: writing output failed: Broken pipe
00:07:19.886 05:09:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:19.886 05:09:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:07:19.886 05:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:19.886 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:19.886 ************************************
00:07:19.886 START TEST accel_negative_buffers
00:07:19.886 ************************************
00:07:19.886 05:09:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:19.886 05:09:36 -- common/autotest_common.sh@650 -- # local es=0
00:07:19.886 05:09:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:19.886 05:09:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:19.886 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.886 05:09:36 -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:19.886 05:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.886 05:09:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:07:20.146 05:09:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:20.146 05:09:36 -- accel/accel.sh@12 -- # build_accel_config
00:07:20.146 05:09:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:20.146 05:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:20.146 05:09:36 -- accel/accel.sh@41 -- # local IFS=,
00:07:20.146 05:09:36 -- accel/accel.sh@42 -- # jq -r .
00:07:20.146 -x option must be non-negative.
00:07:20.146 [2024-11-19 05:09:36.470482] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:20.146 accel_perf options:
00:07:20.146 [-h help message]
00:07:20.146 [-q queue depth per core]
00:07:20.146 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:20.146 [-T number of threads per core
00:07:20.146 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:20.146 [-t time in seconds]
00:07:20.146 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:20.146 [ dif_verify, , dif_generate, dif_generate_copy
00:07:20.146 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:20.146 [-l for compress/decompress workloads, name of uncompressed input file
00:07:20.146 [-S for crc32c workload, use this seed value (default 0)
00:07:20.146 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:20.146 [-f for fill workload, use this BYTE value (default 255)
00:07:20.146 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:20.146 [-y verify result if this switch is on]
00:07:20.146 [-a tasks to allocate per core (default: same value as -q)]
00:07:20.146 Can be used to spread operations across a wider range of memory.
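The usage text above is the accel_perf CLI summary that both negative tests trip over: -w rejects the unknown workload name foobar, and -x rejects -1 because the xor workload needs a non-negative source-buffer count with a documented minimum of 2. As a hedged illustration only (not part of this run's output), a passing invocation of the same xor path, using the binary path and flags already shown in this log, would look like:

  # sketch of a valid manual run; -x 2 satisfies the documented minimum of two xor source buffers
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2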
00:07:20.146 05:09:36 -- common/autotest_common.sh@653 -- # es=1
00:07:20.146 05:09:36 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:20.146 05:09:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:20.146 05:09:36 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:20.146
00:07:20.146 real 0m0.035s
00:07:20.146 user 0m0.021s
00:07:20.146 sys 0m0.014s
00:07:20.146 05:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:20.146 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:20.146 ************************************
00:07:20.146 END TEST accel_negative_buffers
00:07:20.146 ************************************
00:07:20.146 Error: writing output failed: Broken pipe
00:07:20.146 05:09:36 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:20.146 05:09:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:20.146 05:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:20.146 05:09:36 -- common/autotest_common.sh@10 -- # set +x
00:07:20.146 ************************************
00:07:20.146 START TEST accel_crc32c
00:07:20.146 ************************************
00:07:20.146 05:09:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:07:20.146 05:09:36 -- accel/accel.sh@16 -- # local accel_opc
00:07:20.146 05:09:36 -- accel/accel.sh@17 -- # local accel_module
00:07:20.146 05:09:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:20.146 05:09:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:20.146 05:09:36 -- accel/accel.sh@12 -- # build_accel_config
00:07:20.146 05:09:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:20.146 05:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:20.146 05:09:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:20.146 05:09:36 -- accel/accel.sh@41 -- # local IFS=,
00:07:20.146 05:09:36 -- accel/accel.sh@42 -- # jq -r .
00:07:20.146 [2024-11-19 05:09:36.551585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:20.146 [2024-11-19 05:09:36.551649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657605 ]
00:07:20.146 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.146 [2024-11-19 05:09:36.623143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.146 [2024-11-19 05:09:36.662690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.525 05:09:37 -- accel/accel.sh@18 -- # out='
00:07:21.525 SPDK Configuration:
00:07:21.525 Core mask: 0x1
00:07:21.525
00:07:21.525 Accel Perf Configuration:
00:07:21.525 Workload Type: crc32c
00:07:21.525 CRC-32C seed: 32
00:07:21.525 Transfer size: 4096 bytes
00:07:21.525 Vector count 1
00:07:21.525 Module: software
00:07:21.525 Queue depth: 32
00:07:21.525 Allocate depth: 32
00:07:21.525 # threads/core: 1
00:07:21.525 Run time: 1 seconds
00:07:21.526 Verify: Yes
00:07:21.526
00:07:21.526 Running for 1 seconds...
00:07:21.526
00:07:21.526 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:21.526 ------------------------------------------------------------------------------------
00:07:21.526 0,0 592000/s 2312 MiB/s 0 0
00:07:21.526 ====================================================================================
00:07:21.526 Total 592000/s 2312 MiB/s 0 0'
00:07:21.526 05:09:37 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:37 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:21.526 05:09:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:21.526 05:09:37 -- accel/accel.sh@12 -- # build_accel_config
00:07:21.526 05:09:37 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:21.526 05:09:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.526 05:09:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.526 05:09:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:21.526 05:09:37 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:21.526 05:09:37 -- accel/accel.sh@41 -- # local IFS=,
00:07:21.526 05:09:37 -- accel/accel.sh@42 -- # jq -r .
00:07:21.526 [2024-11-19 05:09:37.856834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:21.526 [2024-11-19 05:09:37.856900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657871 ]
00:07:21.526 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.526 [2024-11-19 05:09:37.925298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.526 [2024-11-19 05:09:37.959638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.526 05:09:37 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=0x1
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=crc32c
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@24 -- # accel_opc=crc32c
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=32
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=software
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@23 -- # accel_module=software
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=32
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=32
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=1
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=Yes
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:21.526 05:09:38 -- accel/accel.sh@21 -- # val=
00:07:21.526 05:09:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # IFS=:
00:07:21.526 05:09:38 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@21 -- # val=
00:07:22.906 05:09:39 -- accel/accel.sh@22 -- # case "$var" in
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # IFS=:
00:07:22.906 05:09:39 -- accel/accel.sh@20 -- # read -r var val
00:07:22.906 05:09:39 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:22.906 05:09:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:07:22.906 05:09:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.906
00:07:22.906 real 0m2.601s
00:07:22.906 user 0m2.343s
00:07:22.906 sys 0m0.257s
00:07:22.906 05:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:22.906 05:09:39 -- common/autotest_common.sh@10 -- # set +x
00:07:22.906 ************************************
00:07:22.906 END TEST accel_crc32c
00:07:22.906 ************************************
00:07:22.906 05:09:39 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:07:22.906 05:09:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:22.906 05:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:22.906 05:09:39 -- common/autotest_common.sh@10 -- # set +x
00:07:22.906 ************************************
00:07:22.906 START TEST accel_crc32c_C2
00:07:22.906 ************************************
00:07:22.906 05:09:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:22.906 05:09:39 -- accel/accel.sh@16 -- # local accel_opc
00:07:22.906 05:09:39 -- accel/accel.sh@17 -- # local accel_module
00:07:22.906 05:09:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:22.906 05:09:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:22.906 05:09:39 -- accel/accel.sh@12 -- # build_accel_config
00:07:22.906 05:09:39 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:22.906 05:09:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:22.906 05:09:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:22.906 05:09:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:22.906 05:09:39 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:22.906 05:09:39 -- accel/accel.sh@41 -- # local IFS=,
00:07:22.906 05:09:39 -- accel/accel.sh@42 -- # jq -r .
00:07:22.906 [2024-11-19 05:09:39.195178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:22.906 [2024-11-19 05:09:39.195243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658158 ]
00:07:22.906 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.906 [2024-11-19 05:09:39.262793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.906 [2024-11-19 05:09:39.297623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.286 05:09:40 -- accel/accel.sh@18 -- # out='
00:07:24.286 SPDK Configuration:
00:07:24.286 Core mask: 0x1
00:07:24.286
00:07:24.286 Accel Perf Configuration:
00:07:24.286 Workload Type: crc32c
00:07:24.286 CRC-32C seed: 0
00:07:24.286 Transfer size: 4096 bytes
00:07:24.286 Vector count 2
00:07:24.286 Module: software
00:07:24.286 Queue depth: 32
00:07:24.286 Allocate depth: 32
00:07:24.286 # threads/core: 1
00:07:24.286 Run time: 1 seconds
00:07:24.286 Verify: Yes
00:07:24.286
00:07:24.286 Running for 1 seconds...
00:07:24.286
00:07:24.286 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:24.286 ------------------------------------------------------------------------------------
00:07:24.286 0,0 472544/s 3691 MiB/s 0 0
00:07:24.286 ====================================================================================
00:07:24.286 Total 472544/s 1845 MiB/s 0 0'
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:24.286 05:09:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:24.286 05:09:40 -- accel/accel.sh@12 -- # build_accel_config
00:07:24.286 05:09:40 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:24.286 05:09:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:24.286 05:09:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:24.286 05:09:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:24.286 05:09:40 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:24.286 05:09:40 -- accel/accel.sh@41 -- # local IFS=,
00:07:24.286 05:09:40 -- accel/accel.sh@42 -- # jq -r .
00:07:24.286 [2024-11-19 05:09:40.489662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:24.286 [2024-11-19 05:09:40.489727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658431 ]
00:07:24.286 EAL: No free 2048 kB hugepages reported on node 1
00:07:24.286 [2024-11-19 05:09:40.559385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.286 [2024-11-19 05:09:40.594015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=0x1
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=crc32c
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@24 -- # accel_opc=crc32c
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=0
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=software
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@23 -- # accel_module=software
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=32
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=32
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=1
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=Yes
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:24.286 05:09:40 -- accel/accel.sh@21 -- # val=
00:07:24.286 05:09:40 -- accel/accel.sh@22 -- # case "$var" in
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # IFS=:
00:07:24.286 05:09:40 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@21 -- # val=
00:07:25.226 05:09:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # IFS=:
00:07:25.226 05:09:41 -- accel/accel.sh@20 -- # read -r var val
00:07:25.226 05:09:41 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:25.226 05:09:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:07:25.226 05:09:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.226
00:07:25.226 real 0m2.593s
00:07:25.226 user 0m2.345s
00:07:25.226 sys 0m0.247s
00:07:25.226 05:09:41 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:25.226 05:09:41 -- common/autotest_common.sh@10 -- # set +x
00:07:25.226 ************************************
00:07:25.226 END TEST accel_crc32c_C2
00:07:25.226 ************************************
00:07:25.485 05:09:41 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:25.485 05:09:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:25.485 05:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:25.485 05:09:41 -- common/autotest_common.sh@10 -- # set +x
00:07:25.485 ************************************
00:07:25.485 START TEST accel_copy
00:07:25.485 ************************************
00:07:25.485 05:09:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:07:25.485 05:09:41 -- accel/accel.sh@16 -- # local accel_opc
00:07:25.485 05:09:41 -- accel/accel.sh@17 -- # local accel_module
00:07:25.485 05:09:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:07:25.485 05:09:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:25.485 05:09:41 -- accel/accel.sh@12 -- # build_accel_config
00:07:25.485 05:09:41 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:25.485 05:09:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:25.485 05:09:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:25.485 05:09:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:25.485 05:09:41 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:25.485 05:09:41 -- accel/accel.sh@41 -- # local IFS=,
00:07:25.485 05:09:41 -- accel/accel.sh@42 -- # jq -r .
00:07:25.485 [2024-11-19 05:09:41.832526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:25.485 [2024-11-19 05:09:41.832613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658697 ]
00:07:25.485 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.485 [2024-11-19 05:09:41.902012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.485 [2024-11-19 05:09:41.937171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.865 05:09:43 -- accel/accel.sh@18 -- # out='
00:07:26.865 SPDK Configuration:
00:07:26.865 Core mask: 0x1
00:07:26.865
00:07:26.865 Accel Perf Configuration:
00:07:26.865 Workload Type: copy
00:07:26.865 Transfer size: 4096 bytes
00:07:26.865 Vector count 1
00:07:26.865 Module: software
00:07:26.865 Queue depth: 32
00:07:26.865 Allocate depth: 32
00:07:26.865 # threads/core: 1
00:07:26.865 Run time: 1 seconds
00:07:26.866 Verify: Yes
00:07:26.866
00:07:26.866 Running for 1 seconds...
00:07:26.866
00:07:26.866 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:26.866 ------------------------------------------------------------------------------------
00:07:26.866 0,0 454688/s 1776 MiB/s 0 0
00:07:26.866 ====================================================================================
00:07:26.866 Total 454688/s 1776 MiB/s 0 0'
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:26.866 05:09:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:26.866 05:09:43 -- accel/accel.sh@12 -- # build_accel_config
00:07:26.866 05:09:43 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:26.866 05:09:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:26.866 05:09:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:26.866 05:09:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:26.866 05:09:43 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:26.866 05:09:43 -- accel/accel.sh@41 -- # local IFS=,
00:07:26.866 05:09:43 -- accel/accel.sh@42 -- # jq -r .
00:07:26.866 [2024-11-19 05:09:43.130362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:26.866 [2024-11-19 05:09:43.130427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658842 ]
00:07:26.866 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.866 [2024-11-19 05:09:43.199512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.866 [2024-11-19 05:09:43.233678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=0x1
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=copy
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@24 -- # accel_opc=copy
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=software
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@23 -- # accel_module=software
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=32
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=32
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=1
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=Yes
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:26.866 05:09:43 -- accel/accel.sh@21 -- # val=
00:07:26.866 05:09:43 -- accel/accel.sh@22 -- # case "$var" in
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # IFS=:
00:07:26.866 05:09:43 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@21 -- # val=
00:07:28.246 05:09:44 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # IFS=:
00:07:28.246 05:09:44 -- accel/accel.sh@20 -- # read -r var val
00:07:28.246 05:09:44 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:28.246 05:09:44 -- accel/accel.sh@28 -- # [[ -n copy ]]
00:07:28.246 05:09:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.246
00:07:28.246 real 0m2.594s
00:07:28.246 user 0m2.339s
00:07:28.246 sys 0m0.254s
00:07:28.246 05:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:28.246 05:09:44 -- common/autotest_common.sh@10 -- # set +x
00:07:28.246 ************************************
00:07:28.246 END TEST accel_copy
00:07:28.246 ************************************
00:07:28.246 05:09:44 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:28.247 05:09:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:07:28.247 05:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:28.247 05:09:44 -- common/autotest_common.sh@10 -- # set +x
00:07:28.247 ************************************
00:07:28.247 START TEST accel_fill
00:07:28.247 ************************************
00:07:28.247 05:09:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:28.247 05:09:44 -- accel/accel.sh@16 -- # local accel_opc
00:07:28.247 05:09:44 -- accel/accel.sh@17 -- # local accel_module
00:07:28.247 05:09:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:28.247 05:09:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:28.247 05:09:44 -- accel/accel.sh@12 -- # build_accel_config
00:07:28.247 05:09:44 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:28.247 05:09:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:28.247 05:09:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:28.247 05:09:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:28.247 05:09:44 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:28.247 05:09:44 -- accel/accel.sh@41 -- # local IFS=,
00:07:28.247 05:09:44 -- accel/accel.sh@42 -- # jq -r .
00:07:28.247 [2024-11-19 05:09:44.469956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:28.247 [2024-11-19 05:09:44.470024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659039 ]
00:07:28.247 EAL: No free 2048 kB hugepages reported on node 1
00:07:28.247 [2024-11-19 05:09:44.538445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.247 [2024-11-19 05:09:44.573849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.185 05:09:45 -- accel/accel.sh@18 -- # out='
00:07:29.185 SPDK Configuration:
00:07:29.185 Core mask: 0x1
00:07:29.185
00:07:29.185 Accel Perf Configuration:
00:07:29.185 Workload Type: fill
00:07:29.185 Fill pattern: 0x80
00:07:29.185 Transfer size: 4096 bytes
00:07:29.185 Vector count 1
00:07:29.185 Module: software
00:07:29.185 Queue depth: 64
00:07:29.185 Allocate depth: 64
00:07:29.185 # threads/core: 1
00:07:29.185 Run time: 1 seconds
00:07:29.185 Verify: Yes
00:07:29.185
00:07:29.185 Running for 1 seconds...
00:07:29.185
00:07:29.185 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:29.185 ------------------------------------------------------------------------------------
00:07:29.185 0,0 699520/s 2732 MiB/s 0 0
00:07:29.185 ====================================================================================
00:07:29.185 Total 699520/s 2732 MiB/s 0 0'
00:07:29.185 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.185 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.185 05:09:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:29.185 05:09:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:29.185 05:09:45 -- accel/accel.sh@12 -- # build_accel_config
00:07:29.185 05:09:45 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:29.185 05:09:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:29.185 05:09:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:29.185 05:09:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:29.185 05:09:45 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:29.185 05:09:45 -- accel/accel.sh@41 -- # local IFS=,
00:07:29.185 05:09:45 -- accel/accel.sh@42 -- # jq -r .
00:07:29.444 [2024-11-19 05:09:45.765237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:29.444 [2024-11-19 05:09:45.765305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659288 ]
00:07:29.444 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.444 [2024-11-19 05:09:45.834175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.444 [2024-11-19 05:09:45.868193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=0x1
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=fill
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@24 -- # accel_opc=fill
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=0x80
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=software
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@23 -- # accel_module=software
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=64
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=64
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=1
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=Yes
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:29.444 05:09:45 -- accel/accel.sh@21 -- # val=
00:07:29.444 05:09:45 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # IFS=:
00:07:29.444 05:09:45 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@21 -- # val=
00:07:30.820 05:09:47 -- accel/accel.sh@22 -- # case "$var" in
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # IFS=:
00:07:30.820 05:09:47 -- accel/accel.sh@20 -- # read -r var val
00:07:30.820 05:09:47 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:30.820 05:09:47 -- accel/accel.sh@28 -- # [[ -n fill ]]
00:07:30.820 05:09:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.820
00:07:30.820 real 0m2.590s
00:07:30.820 user 0m2.341s
00:07:30.820 sys 0m0.248s
00:07:30.820 05:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:30.820 05:09:47 -- common/autotest_common.sh@10 -- # set +x
00:07:30.820 ************************************
00:07:30.820 END TEST accel_fill
00:07:30.820 ************************************
00:07:30.820 05:09:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:30.820 05:09:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:30.820 05:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:30.820 05:09:47 -- common/autotest_common.sh@10 -- # set +x
00:07:30.820 ************************************
00:07:30.820 START TEST accel_copy_crc32c
00:07:30.820 ************************************
00:07:30.820 05:09:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:07:30.820 05:09:47 -- accel/accel.sh@16 -- # local accel_opc
00:07:30.820 05:09:47 -- accel/accel.sh@17 -- # local accel_module
00:07:30.820 05:09:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:30.820 05:09:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:30.820 05:09:47 -- accel/accel.sh@12 -- # build_accel_config
00:07:30.820 05:09:47 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:30.820 05:09:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:30.820 05:09:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:30.820 05:09:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:30.820 05:09:47 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:30.820 05:09:47 -- accel/accel.sh@41 -- # local IFS=,
00:07:30.820 05:09:47 -- accel/accel.sh@42 -- # jq -r .
00:07:30.820 [2024-11-19 05:09:47.103948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:30.820 [2024-11-19 05:09:47.104016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659573 ]
00:07:30.820 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.820 [2024-11-19 05:09:47.172063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.820 [2024-11-19 05:09:47.206810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.198 05:09:48 -- accel/accel.sh@18 -- # out='
00:07:32.198 SPDK Configuration:
00:07:32.198 Core mask: 0x1
00:07:32.198
00:07:32.198 Accel Perf Configuration:
00:07:32.198 Workload Type: copy_crc32c
00:07:32.198 CRC-32C seed: 0
00:07:32.198 Vector size: 4096 bytes
00:07:32.198 Transfer size: 4096 bytes
00:07:32.198 Vector count 1
00:07:32.198 Module: software
00:07:32.198 Queue depth: 32
00:07:32.198 Allocate depth: 32
00:07:32.198 # threads/core: 1
00:07:32.198 Run time: 1 seconds
00:07:32.198 Verify: Yes
00:07:32.198
00:07:32.198 Running for 1 seconds...
00:07:32.198
00:07:32.198 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:32.198 ------------------------------------------------------------------------------------
00:07:32.198 0,0 349312/s 1364 MiB/s 0 0
00:07:32.198 ====================================================================================
00:07:32.198 Total 349312/s 1364 MiB/s 0 0'
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:32.198 05:09:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:32.198 05:09:48 -- accel/accel.sh@12 -- # build_accel_config
00:07:32.198 05:09:48 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:32.198 05:09:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:32.198 05:09:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:32.198 05:09:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:32.198 05:09:48 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:32.198 05:09:48 -- accel/accel.sh@41 -- # local IFS=,
00:07:32.198 05:09:48 -- accel/accel.sh@42 -- # jq -r .
00:07:32.198 [2024-11-19 05:09:48.397720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:32.198 [2024-11-19 05:09:48.397786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659842 ]
00:07:32.198 EAL: No free 2048 kB hugepages reported on node 1
00:07:32.198 [2024-11-19 05:09:48.466413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.198 [2024-11-19 05:09:48.500498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=0x1
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=copy_crc32c
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=0
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.198 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.198 05:09:48 -- accel/accel.sh@21 -- # val=software
00:07:32.198 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.198 05:09:48 -- accel/accel.sh@23 -- # accel_module=software
00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=:
00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val
00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val=32
00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val=32 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val=1 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val=Yes 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val= 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.199 05:09:48 -- accel/accel.sh@21 -- # val= 00:07:32.199 05:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.199 05:09:48 -- accel/accel.sh@20 -- # read -r var val 00:07:33.136 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.136 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.136 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.136 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.136 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.136 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.136 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.136 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.137 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.137 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.137 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.137 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.137 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.137 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.137 05:09:49 -- accel/accel.sh@21 -- # val= 00:07:33.137 05:09:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.137 05:09:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.137 05:09:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.137 05:09:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:33.137 05:09:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.137 00:07:33.137 real 0m2.588s 00:07:33.137 user 0m2.339s 00:07:33.137 sys 0m0.248s 00:07:33.137 05:09:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.137 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.137 ************************************ 00:07:33.137 END TEST accel_copy_crc32c 00:07:33.137 ************************************ 00:07:33.396 
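The Bandwidth column in these result tables is simply transfers per second times the transfer size, reported in whole MiB/s. A quick awk check against the copy_crc32c table above (4096-byte transfers):

  awk 'BEGIN { printf "%d MiB/s\n", 349312 * 4096 / (1024 * 1024) }'   # -> 1364 MiB/s, as reported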
05:09:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.396 05:09:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:33.396 05:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.396 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 ************************************ 00:07:33.396 START TEST accel_copy_crc32c_C2 00:07:33.396 ************************************ 00:07:33.396 05:09:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.396 05:09:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.396 05:09:49 -- accel/accel.sh@17 -- # local accel_module 00:07:33.396 05:09:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:33.396 05:09:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:33.396 05:09:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.396 05:09:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.396 05:09:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.396 05:09:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.396 05:09:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.396 05:09:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.396 05:09:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.396 05:09:49 -- accel/accel.sh@42 -- # jq -r . 00:07:33.396 [2024-11-19 05:09:49.733790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.396 [2024-11-19 05:09:49.733855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660130 ] 00:07:33.396 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.396 [2024-11-19 05:09:49.802689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.396 [2024-11-19 05:09:49.837287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.777 05:09:51 -- accel/accel.sh@18 -- # out=' 00:07:34.777 SPDK Configuration: 00:07:34.777 Core mask: 0x1 00:07:34.777 00:07:34.777 Accel Perf Configuration: 00:07:34.777 Workload Type: copy_crc32c 00:07:34.777 CRC-32C seed: 0 00:07:34.777 Vector size: 4096 bytes 00:07:34.777 Transfer size: 8192 bytes 00:07:34.777 Vector count 2 00:07:34.777 Module: software 00:07:34.777 Queue depth: 32 00:07:34.777 Allocate depth: 32 00:07:34.777 # threads/core: 1 00:07:34.777 Run time: 1 seconds 00:07:34.777 Verify: Yes 00:07:34.777 00:07:34.777 Running for 1 seconds... 
00:07:34.777 00:07:34.777 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.777 ------------------------------------------------------------------------------------ 00:07:34.777 0,0 242624/s 1895 MiB/s 0 0 00:07:34.777 ==================================================================================== 00:07:34.777 Total 242624/s 1895 MiB/s 0 0' 00:07:34.777 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.777 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.777 05:09:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:34.777 05:09:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:34.777 05:09:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.777 05:09:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.777 05:09:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.777 05:09:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.777 05:09:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.777 05:09:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.777 05:09:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.777 05:09:51 -- accel/accel.sh@42 -- # jq -r . 00:07:34.777 [2024-11-19 05:09:51.031306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.777 [2024-11-19 05:09:51.031371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660357 ] 00:07:34.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.777 [2024-11-19 05:09:51.100597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.778 [2024-11-19 05:09:51.135692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=0x1 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=0 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=:
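The -C 2 variant raises the vector count to 2, so each copy_crc32c operation spans two 4096-byte vectors and the table reports an 8192-byte transfer size. The same arithmetic reproduces the bandwidth figure:

  awk 'BEGIN { printf "%d MiB/s\n", 242624 * 8192 / (1024 * 1024) }'   # -> 1895 MiB/s, matching both rows of the table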
00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=software 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=32 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=32 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=1 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val=Yes 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.778 05:09:51 -- accel/accel.sh@21 -- # val= 00:07:34.778 05:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.778 05:09:51 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@21 -- # val= 00:07:36.159 05:09:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.159 05:09:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.159 05:09:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.159 05:09:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:36.159 05:09:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.159 00:07:36.159 real 0m2.598s 00:07:36.159 user 0m2.370s 00:07:36.159 sys 0m0.228s 00:07:36.159 05:09:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.159 05:09:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 ************************************ 00:07:36.159 END TEST accel_copy_crc32c_C2 00:07:36.159 ************************************ 00:07:36.159 05:09:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:36.159 05:09:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:36.159 05:09:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.159 05:09:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 ************************************ 00:07:36.159 START TEST accel_dualcast 00:07:36.159 ************************************ 00:07:36.159 05:09:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:36.159 05:09:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.159 05:09:52 -- accel/accel.sh@17 -- # local accel_module 00:07:36.159 05:09:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:36.159 05:09:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:36.159 05:09:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.159 05:09:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.159 05:09:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.159 05:09:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.159 05:09:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.159 05:09:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.159 05:09:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.159 05:09:52 -- accel/accel.sh@42 -- # jq -r . 00:07:36.159 [2024-11-19 05:09:52.370879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.159 [2024-11-19 05:09:52.370948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660539 ] 00:07:36.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.159 [2024-11-19 05:09:52.441190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.159 [2024-11-19 05:09:52.478291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.098 05:09:53 -- accel/accel.sh@18 -- # out=' 00:07:37.098 SPDK Configuration: 00:07:37.098 Core mask: 0x1 00:07:37.098 00:07:37.098 Accel Perf Configuration: 00:07:37.098 Workload Type: dualcast 00:07:37.098 Transfer size: 4096 bytes 00:07:37.098 Vector count 1 00:07:37.098 Module: software 00:07:37.098 Queue depth: 32 00:07:37.098 Allocate depth: 32 00:07:37.098 # threads/core: 1 00:07:37.098 Run time: 1 seconds 00:07:37.098 Verify: Yes 00:07:37.098 00:07:37.098 Running for 1 seconds... 00:07:37.098 00:07:37.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.098 ------------------------------------------------------------------------------------ 00:07:37.098 0,0 534560/s 2088 MiB/s 0 0 00:07:37.098 ==================================================================================== 00:07:37.098 Total 534560/s 2088 MiB/s 0 0' 00:07:37.098 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.098 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.098 05:09:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:37.098 05:09:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:37.098 05:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.098 05:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.098 05:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.098 05:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.098 05:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.098 05:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.098 05:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.098 05:09:53 -- accel/accel.sh@42 -- # jq -r . 00:07:37.358 [2024-11-19 05:09:53.671678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
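dualcast writes one 4096-byte source buffer to two destination buffers per operation (the table counts the 4096-byte transfer once), and at 534560 transfers/s the software module sustains roughly 2 GiB/s here. A sweep over the software-path workloads in this section could be reproduced with a loop along these lines (the path and the '{}' config are the same assumptions as in the earlier sketch):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # run each workload for one second against the software module
  for w in copy_crc32c dualcast compare xor; do
    "$SPDK/build/examples/accel_perf" -c <(printf '{}') -t 1 -w "$w" -y
  done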
00:07:37.358 [2024-11-19 05:09:53.671745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660709 ] 00:07:37.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.358 [2024-11-19 05:09:53.740498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.358 [2024-11-19 05:09:53.774364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=0x1 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=dualcast 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=software 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=32 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=32 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=1 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val=Yes 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.358 05:09:53 -- accel/accel.sh@21 -- # val= 00:07:37.358 05:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.358 05:09:53 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@21 -- # val= 00:07:38.741 05:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # IFS=: 00:07:38.741 05:09:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.741 05:09:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.741 05:09:54 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:38.741 05:09:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.741 00:07:38.741 real 0m2.595s 00:07:38.741 user 0m2.341s 00:07:38.741 sys 0m0.254s 00:07:38.741 05:09:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.741 05:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.741 ************************************ 00:07:38.741 END TEST accel_dualcast 00:07:38.741 ************************************ 00:07:38.741 05:09:54 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:38.741 05:09:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.741 05:09:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.741 05:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.741 ************************************ 00:07:38.741 START TEST accel_compare 00:07:38.741 ************************************ 00:07:38.741 05:09:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:38.741 05:09:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.741 05:09:54 
-- accel/accel.sh@17 -- # local accel_module 00:07:38.741 05:09:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:38.741 05:09:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:38.741 05:09:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.741 05:09:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.741 05:09:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.741 05:09:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.741 05:09:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.741 05:09:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.741 05:09:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.741 05:09:54 -- accel/accel.sh@42 -- # jq -r . 00:07:38.741 [2024-11-19 05:09:55.009486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.741 [2024-11-19 05:09:55.009556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660990 ] 00:07:38.741 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.741 [2024-11-19 05:09:55.076992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.741 [2024-11-19 05:09:55.111765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.118 05:09:56 -- accel/accel.sh@18 -- # out=' 00:07:40.118 SPDK Configuration: 00:07:40.118 Core mask: 0x1 00:07:40.118 00:07:40.118 Accel Perf Configuration: 00:07:40.118 Workload Type: compare 00:07:40.118 Transfer size: 4096 bytes 00:07:40.118 Vector count 1 00:07:40.118 Module: software 00:07:40.118 Queue depth: 32 00:07:40.118 Allocate depth: 32 00:07:40.118 # threads/core: 1 00:07:40.118 Run time: 1 seconds 00:07:40.118 Verify: Yes 00:07:40.118 00:07:40.118 Running for 1 seconds... 00:07:40.118 00:07:40.118 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.118 ------------------------------------------------------------------------------------ 00:07:40.118 0,0 648224/s 2532 MiB/s 0 0 00:07:40.118 ==================================================================================== 00:07:40.118 Total 648224/s 2532 MiB/s 0 0' 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:40.118 05:09:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:40.118 05:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.118 05:09:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.118 05:09:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.118 05:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.118 05:09:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.118 05:09:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.118 05:09:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.118 05:09:56 -- accel/accel.sh@42 -- # jq -r . 00:07:40.118 [2024-11-19 05:09:56.303713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
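compare only reads its two buffers (a memcmp-style check), which is why it posts the highest throughput of this batch at 2532 MiB/s. Rather than repeating the bandwidth arithmetic for every table, a small helper makes the check reusable:

  # mibps TRANSFERS_PER_SEC TRANSFER_BYTES -> integer MiB/s, as accel_perf reports it
  mibps() { awk -v t="$1" -v s="$2" 'BEGIN { printf "%d\n", t * s / (1024 * 1024) }'; }
  mibps 648224 4096   # -> 2532, matching the compare table above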
00:07:40.118 [2024-11-19 05:09:56.303777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661263 ] 00:07:40.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.118 [2024-11-19 05:09:56.371643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.118 [2024-11-19 05:09:56.405504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val=0x1 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val=compare 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.118 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.118 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.118 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val=software 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val=32 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val=32 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val=1 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val=Yes 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.119 05:09:56 -- accel/accel.sh@21 -- # val= 00:07:40.119 05:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.119 05:09:56 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@21 -- # val= 00:07:41.062 05:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # IFS=: 00:07:41.062 05:09:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.062 05:09:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.062 05:09:57 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:41.062 05:09:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.062 00:07:41.062 real 0m2.587s 00:07:41.062 user 0m2.340s 00:07:41.062 sys 0m0.245s 00:07:41.062 05:09:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.062 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 ************************************ 00:07:41.062 END TEST accel_compare 00:07:41.062 ************************************ 00:07:41.062 05:09:57 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:41.062 05:09:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:41.062 05:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.062 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 ************************************ 00:07:41.062 START TEST accel_xor 00:07:41.062 ************************************ 00:07:41.062 05:09:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:41.062 05:09:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.062 05:09:57 -- accel/accel.sh@17 
-- # local accel_module 00:07:41.062 05:09:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:41.062 05:09:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:41.063 05:09:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.063 05:09:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.063 05:09:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.063 05:09:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.063 05:09:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.063 05:09:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.063 05:09:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.063 05:09:57 -- accel/accel.sh@42 -- # jq -r . 00:07:41.322 [2024-11-19 05:09:57.639932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.322 [2024-11-19 05:09:57.639995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661544 ] 00:07:41.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.322 [2024-11-19 05:09:57.706897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.322 [2024-11-19 05:09:57.741433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.700 05:09:58 -- accel/accel.sh@18 -- # out=' 00:07:42.700 SPDK Configuration: 00:07:42.700 Core mask: 0x1 00:07:42.700 00:07:42.700 Accel Perf Configuration: 00:07:42.700 Workload Type: xor 00:07:42.700 Source buffers: 2 00:07:42.700 Transfer size: 4096 bytes 00:07:42.700 Vector count 1 00:07:42.700 Module: software 00:07:42.700 Queue depth: 32 00:07:42.700 Allocate depth: 32 00:07:42.700 # threads/core: 1 00:07:42.700 Run time: 1 seconds 00:07:42.700 Verify: Yes 00:07:42.700 00:07:42.700 Running for 1 seconds... 00:07:42.700 00:07:42.700 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.700 ------------------------------------------------------------------------------------ 00:07:42.700 0,0 496864/s 1940 MiB/s 0 0 00:07:42.700 ==================================================================================== 00:07:42.700 Total 496864/s 1940 MiB/s 0 0' 00:07:42.700 05:09:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:42.700 05:09:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:42.700 05:09:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.700 05:09:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.700 05:09:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.700 05:09:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.700 05:09:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.700 05:09:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.700 05:09:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.700 05:09:58 -- accel/accel.sh@42 -- # jq -r . 00:07:42.700 [2024-11-19 05:09:58.934760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.700 [2024-11-19 05:09:58.934824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661816 ] 00:07:42.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.700 [2024-11-19 05:09:59.002868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.700 [2024-11-19 05:09:59.036946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=0x1 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=xor 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=2 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=software 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=32 00:07:42.700 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.700 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.700 05:09:59 -- accel/accel.sh@21 -- # val=32 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.701 05:09:59 -- 
accel/accel.sh@21 -- # val=1 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.701 05:09:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.701 05:09:59 -- accel/accel.sh@21 -- # val=Yes 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.701 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.701 05:09:59 -- accel/accel.sh@21 -- # val= 00:07:42.701 05:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.701 05:09:59 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@21 -- # val= 00:07:44.079 05:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # IFS=: 00:07:44.079 05:10:00 -- accel/accel.sh@20 -- # read -r var val 00:07:44.079 05:10:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.079 05:10:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:44.079 05:10:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.079 00:07:44.079 real 0m2.591s 00:07:44.079 user 0m2.337s 00:07:44.079 sys 0m0.253s 00:07:44.079 05:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.079 05:10:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.079 ************************************ 00:07:44.079 END TEST accel_xor 00:07:44.079 ************************************ 00:07:44.079 05:10:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:44.079 05:10:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:44.079 05:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.079 05:10:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.079 ************************************ 00:07:44.079 START TEST accel_xor 
00:07:44.079 ************************************ 00:07:44.079 05:10:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:44.079 05:10:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.079 05:10:00 -- accel/accel.sh@17 -- # local accel_module 00:07:44.079 05:10:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:44.079 05:10:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:44.079 05:10:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.079 05:10:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.079 05:10:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.079 05:10:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.079 05:10:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.079 05:10:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.079 05:10:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.079 05:10:00 -- accel/accel.sh@42 -- # jq -r . 00:07:44.079 [2024-11-19 05:10:00.272872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.079 [2024-11-19 05:10:00.272935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662058 ] 00:07:44.079 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.079 [2024-11-19 05:10:00.342125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.079 [2024-11-19 05:10:00.377521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.088 05:10:01 -- accel/accel.sh@18 -- # out=' 00:07:45.088 SPDK Configuration: 00:07:45.088 Core mask: 0x1 00:07:45.088 00:07:45.088 Accel Perf Configuration: 00:07:45.088 Workload Type: xor 00:07:45.088 Source buffers: 3 00:07:45.088 Transfer size: 4096 bytes 00:07:45.088 Vector count 1 00:07:45.088 Module: software 00:07:45.088 Queue depth: 32 00:07:45.088 Allocate depth: 32 00:07:45.088 # threads/core: 1 00:07:45.088 Run time: 1 seconds 00:07:45.088 Verify: Yes 00:07:45.088 00:07:45.088 Running for 1 seconds... 00:07:45.088 00:07:45.088 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.088 ------------------------------------------------------------------------------------ 00:07:45.088 0,0 463136/s 1809 MiB/s 0 0 00:07:45.088 ==================================================================================== 00:07:45.088 Total 463136/s 1809 MiB/s 0 0' 00:07:45.088 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.088 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.088 05:10:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:45.088 05:10:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:45.088 05:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.088 05:10:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.088 05:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.088 05:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.088 05:10:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.088 05:10:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.088 05:10:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.088 05:10:01 -- accel/accel.sh@42 -- # jq -r . 00:07:45.088 [2024-11-19 05:10:01.569209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
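Placing the two xor runs side by side shows the cost of the extra source buffer: -x 3 raises the source count from 2 to 3 and throughput drops from 496864 to 463136 transfers/s. The relative slowdown, computed from the two tables above:

  awk 'BEGIN { printf "%.1f%%\n", (1 - 463136 / 496864) * 100 }'   # -> 6.8% fewer transfers/s with three sources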
00:07:45.088 [2024-11-19 05:10:01.569274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662205 ] 00:07:45.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.348 [2024-11-19 05:10:01.639115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.348 [2024-11-19 05:10:01.675057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=0x1 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=xor 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=3 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=software 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=32 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=32 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- 
accel/accel.sh@21 -- # val=1 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val=Yes 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.348 05:10:01 -- accel/accel.sh@21 -- # val= 00:07:45.348 05:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.348 05:10:01 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@21 -- # val= 00:07:46.286 05:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.286 05:10:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.286 05:10:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.286 05:10:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:46.286 05:10:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.286 00:07:46.286 real 0m2.595s 00:07:46.286 user 0m2.341s 00:07:46.286 sys 0m0.254s 00:07:46.286 05:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.286 05:10:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.286 ************************************ 00:07:46.286 END TEST accel_xor 00:07:46.286 ************************************ 00:07:46.545 05:10:02 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:46.545 05:10:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:46.545 05:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.545 05:10:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.545 ************************************ 00:07:46.545 START TEST 
accel_dif_verify 00:07:46.545 ************************************ 00:07:46.545 05:10:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:46.545 05:10:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.545 05:10:02 -- accel/accel.sh@17 -- # local accel_module 00:07:46.545 05:10:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:46.545 05:10:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:46.545 05:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.545 05:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.545 05:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.545 05:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.545 05:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.545 05:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.545 05:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.545 05:10:02 -- accel/accel.sh@42 -- # jq -r . 00:07:46.545 [2024-11-19 05:10:02.912579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.545 [2024-11-19 05:10:02.912662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662410 ] 00:07:46.545 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.545 [2024-11-19 05:10:02.983623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.545 [2024-11-19 05:10:03.018758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.924 05:10:04 -- accel/accel.sh@18 -- # out=' 00:07:47.924 SPDK Configuration: 00:07:47.924 Core mask: 0x1 00:07:47.924 00:07:47.924 Accel Perf Configuration: 00:07:47.924 Workload Type: dif_verify 00:07:47.924 Vector size: 4096 bytes 00:07:47.924 Transfer size: 4096 bytes 00:07:47.924 Block size: 512 bytes 00:07:47.924 Metadata size: 8 bytes 00:07:47.924 Vector count 1 00:07:47.924 Module: software 00:07:47.924 Queue depth: 32 00:07:47.924 Allocate depth: 32 00:07:47.924 # threads/core: 1 00:07:47.924 Run time: 1 seconds 00:07:47.924 Verify: No 00:07:47.924 00:07:47.924 Running for 1 seconds... 00:07:47.924 00:07:47.924 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.924 ------------------------------------------------------------------------------------ 00:07:47.924 0,0 139616/s 553 MiB/s 0 0 00:07:47.924 ==================================================================================== 00:07:47.924 Total 139616/s 545 MiB/s 0 0' 00:07:47.924 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.924 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.924 05:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:47.925 05:10:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:47.925 05:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.925 05:10:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.925 05:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.925 05:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.925 05:10:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.925 05:10:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.925 05:10:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.925 05:10:04 -- accel/accel.sh@42 -- # jq -r . 
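The dif_verify numbers above are internally consistent: 139616 transfers/s × 4096 B = 571,867,136 B/s ≈ 545 MiB/s, the Total row of the table. With the 512-byte block size and 8-byte metadata size from the configuration block, each 4096-byte vector covers 8 protected blocks, so this run checks roughly 139616 × 8 ≈ 1.12 million DIF tuples per second; in the usual T10-style layout, an 8-byte tuple carries a 16-bit guard CRC, a 16-bit application tag, and a 32-bit reference tag.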
00:07:47.925 [2024-11-19 05:10:04.210713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.925 [2024-11-19 05:10:04.210778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662678 ] 00:07:47.925 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.925 [2024-11-19 05:10:04.279833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.925 [2024-11-19 05:10:04.313819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=0x1 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=dif_verify 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=software 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=32 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=32 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=1 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val=No 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.925 05:10:04 -- accel/accel.sh@21 -- # val= 00:07:47.925 05:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.925 05:10:04 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@21 -- # val= 00:07:49.304 05:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.304 05:10:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.304 05:10:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.304 05:10:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:49.304 05:10:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.304 00:07:49.304 real 0m2.595s 00:07:49.304 user 0m2.343s 00:07:49.304 sys 0m0.251s 00:07:49.304 05:10:05 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.304 05:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:49.304 ************************************ 00:07:49.304 END TEST accel_dif_verify 00:07:49.304 ************************************ 00:07:49.304 05:10:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:49.304 05:10:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:49.304 05:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.304 05:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:49.304 ************************************ 00:07:49.304 START TEST accel_dif_generate 00:07:49.304 ************************************ 00:07:49.304 05:10:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:49.304 05:10:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.304 05:10:05 -- accel/accel.sh@17 -- # local accel_module 00:07:49.304 05:10:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:49.304 05:10:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:49.304 05:10:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.304 05:10:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.304 05:10:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.304 05:10:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.304 05:10:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.304 05:10:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.304 05:10:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.304 05:10:05 -- accel/accel.sh@42 -- # jq -r . 00:07:49.304 [2024-11-19 05:10:05.549700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.304 [2024-11-19 05:10:05.549763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662964 ] 00:07:49.304 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.304 [2024-11-19 05:10:05.619001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.304 [2024-11-19 05:10:05.653942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.682 05:10:06 -- accel/accel.sh@18 -- # out=' 00:07:50.682 SPDK Configuration: 00:07:50.682 Core mask: 0x1 00:07:50.682 00:07:50.682 Accel Perf Configuration: 00:07:50.682 Workload Type: dif_generate 00:07:50.682 Vector size: 4096 bytes 00:07:50.682 Transfer size: 4096 bytes 00:07:50.682 Block size: 512 bytes 00:07:50.682 Metadata size: 8 bytes 00:07:50.682 Vector count 1 00:07:50.682 Module: software 00:07:50.682 Queue depth: 32 00:07:50.682 Allocate depth: 32 00:07:50.682 # threads/core: 1 00:07:50.682 Run time: 1 seconds 00:07:50.682 Verify: No 00:07:50.682 00:07:50.682 Running for 1 seconds... 
00:07:50.682 00:07:50.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.682 ------------------------------------------------------------------------------------ 00:07:50.682 0,0 167808/s 665 MiB/s 0 0 00:07:50.682 ==================================================================================== 00:07:50.682 Total 167808/s 655 MiB/s 0 0' 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:50.682 05:10:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:50.682 05:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.682 05:10:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.682 05:10:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.682 05:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.682 05:10:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.682 05:10:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.682 05:10:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.682 05:10:06 -- accel/accel.sh@42 -- # jq -r . 00:07:50.682 [2024-11-19 05:10:06.845322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.682 [2024-11-19 05:10:06.845385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663230 ] 00:07:50.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.682 [2024-11-19 05:10:06.913464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.682 [2024-11-19 05:10:06.947420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=0x1 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=dif_generate 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 
00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=software 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=32 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=32 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=1 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val=No 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.682 05:10:06 -- accel/accel.sh@21 -- # val= 00:07:50.682 05:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.682 05:10:06 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@21 -- # val= 00:07:51.620 05:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.620 05:10:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.620 05:10:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.620 05:10:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:51.620 05:10:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.620 00:07:51.620 real 0m2.590s 00:07:51.620 user 0m2.346s 00:07:51.620 sys 0m0.244s 00:07:51.620 05:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.620 05:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:51.620 ************************************ 00:07:51.620 END TEST accel_dif_generate 00:07:51.620 ************************************ 00:07:51.620 05:10:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:51.620 05:10:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:51.620 05:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.620 05:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:51.620 ************************************ 00:07:51.620 START TEST accel_dif_generate_copy 00:07:51.620 ************************************ 00:07:51.620 05:10:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:51.620 05:10:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.620 05:10:08 -- accel/accel.sh@17 -- # local accel_module 00:07:51.620 05:10:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:51.620 05:10:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:51.620 05:10:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.620 05:10:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.620 05:10:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.620 05:10:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.620 05:10:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.620 05:10:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.620 05:10:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.620 05:10:08 -- accel/accel.sh@42 -- # jq -r . 00:07:51.879 [2024-11-19 05:10:08.184623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
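Each of these cases is the same accel_perf binary with only the workload switched (-w dif_generate_copy here). A minimal sketch of reproducing a run by hand, using only flags that appear verbatim in the trace above; the -c /dev/fd/62 JSON-config plumbing is left out, and since the build_accel_config trace shows an empty accel_json_cfg, the software module is expected either way, matching the accel_module=software lines:

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    # 1-second run on core 0; hugepages must be set up first, per the EAL lines above.
    # Queue depth 32 and vector size 4096 in the tables are this build's defaults,
    # since the traced command passes no size or depth overrides.
    $PERF -t 1 -w dif_generate_copy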
00:07:51.879 [2024-11-19 05:10:08.184688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663519 ] 00:07:51.879 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.879 [2024-11-19 05:10:08.253384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.879 [2024-11-19 05:10:08.288717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.260 05:10:09 -- accel/accel.sh@18 -- # out=' 00:07:53.260 SPDK Configuration: 00:07:53.260 Core mask: 0x1 00:07:53.260 00:07:53.260 Accel Perf Configuration: 00:07:53.260 Workload Type: dif_generate_copy 00:07:53.260 Vector size: 4096 bytes 00:07:53.260 Transfer size: 4096 bytes 00:07:53.260 Vector count 1 00:07:53.260 Module: software 00:07:53.260 Queue depth: 32 00:07:53.260 Allocate depth: 32 00:07:53.260 # threads/core: 1 00:07:53.260 Run time: 1 seconds 00:07:53.260 Verify: No 00:07:53.260 00:07:53.260 Running for 1 seconds... 00:07:53.260 00:07:53.260 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.260 ------------------------------------------------------------------------------------ 00:07:53.260 0,0 128736/s 510 MiB/s 0 0 00:07:53.260 ==================================================================================== 00:07:53.260 Total 128736/s 502 MiB/s 0 0' 00:07:53.260 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.260 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.260 05:10:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:53.260 05:10:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:53.260 05:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.260 05:10:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.260 05:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.261 05:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.261 05:10:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.261 05:10:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.261 05:10:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.261 05:10:09 -- accel/accel.sh@42 -- # jq -r . 00:07:53.261 [2024-11-19 05:10:09.480743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
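Each START TEST / END TEST banner pair, and the real/user/sys triple between them, comes from the common run_test helper timing the test function. A hypothetical reconstruction of that pattern (the real helper in autotest_common.sh also does the xtrace_disable bookkeeping visible above; this body is a simplified sketch, not the actual source):

    run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                    # emits the real/user/sys lines seen in this log
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
    }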
00:07:53.261 [2024-11-19 05:10:09.480810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663749 ] 00:07:53.261 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.261 [2024-11-19 05:10:09.549508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.261 [2024-11-19 05:10:09.584589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=0x1 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=software 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=32 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=32 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r 
var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=1 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val=No 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.261 05:10:09 -- accel/accel.sh@21 -- # val= 00:07:53.261 05:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.261 05:10:09 -- accel/accel.sh@20 -- # read -r var val 00:07:54.199 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.199 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.200 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.200 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.200 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.200 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@21 -- # val= 00:07:54.200 05:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # IFS=: 00:07:54.200 05:10:10 -- accel/accel.sh@20 -- # read -r var val 00:07:54.200 05:10:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.200 05:10:10 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:54.200 05:10:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.200 00:07:54.200 real 0m2.594s 00:07:54.200 user 0m2.350s 00:07:54.200 sys 0m0.244s 00:07:54.200 05:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.200 05:10:10 -- common/autotest_common.sh@10 -- # set +x 00:07:54.200 ************************************ 00:07:54.200 END TEST accel_dif_generate_copy 00:07:54.200 ************************************ 00:07:54.459 05:10:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:54.460 05:10:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:54.460 05:10:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:54.460 05:10:10 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.460 05:10:10 -- common/autotest_common.sh@10 -- # set +x 00:07:54.460 ************************************ 00:07:54.460 START TEST accel_comp 00:07:54.460 ************************************ 00:07:54.460 05:10:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:54.460 05:10:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.460 05:10:10 -- accel/accel.sh@17 -- # local accel_module 00:07:54.460 05:10:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:54.460 05:10:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:54.460 05:10:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.460 05:10:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.460 05:10:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.460 05:10:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.460 05:10:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.460 05:10:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.460 05:10:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.460 05:10:10 -- accel/accel.sh@42 -- # jq -r . 00:07:54.460 [2024-11-19 05:10:10.828211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.460 [2024-11-19 05:10:10.828280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663936 ] 00:07:54.460 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.460 [2024-11-19 05:10:10.902173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.460 [2024-11-19 05:10:10.937941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.839 05:10:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:55.839 00:07:55.839 SPDK Configuration: 00:07:55.839 Core mask: 0x1 00:07:55.839 00:07:55.839 Accel Perf Configuration: 00:07:55.839 Workload Type: compress 00:07:55.839 Transfer size: 4096 bytes 00:07:55.839 Vector count 1 00:07:55.839 Module: software 00:07:55.839 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.839 Queue depth: 32 00:07:55.839 Allocate depth: 32 00:07:55.839 # threads/core: 1 00:07:55.839 Run time: 1 seconds 00:07:55.839 Verify: No 00:07:55.839 00:07:55.839 Running for 1 seconds... 
00:07:55.839 00:07:55.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.839 ------------------------------------------------------------------------------------ 00:07:55.839 0,0 65504/s 272 MiB/s 0 0 00:07:55.839 ==================================================================================== 00:07:55.839 Total 65504/s 255 MiB/s 0 0' 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.839 05:10:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.839 05:10:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.839 05:10:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.839 05:10:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.839 05:10:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.839 05:10:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.839 05:10:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.839 05:10:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.839 05:10:12 -- accel/accel.sh@42 -- # jq -r . 00:07:55.839 [2024-11-19 05:10:12.132025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.839 [2024-11-19 05:10:12.132090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664096 ] 00:07:55.839 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.839 [2024-11-19 05:10:12.202624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.839 [2024-11-19 05:10:12.239908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=0x1 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=compress 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=software 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=32 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=32 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=1 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val=No 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.839 05:10:12 -- accel/accel.sh@21 -- # val= 00:07:55.839 05:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.839 05:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 
05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@21 -- # val= 00:07:57.219 05:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # IFS=: 00:07:57.219 05:10:13 -- accel/accel.sh@20 -- # read -r var val 00:07:57.219 05:10:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.219 05:10:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:57.219 05:10:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.219 00:07:57.219 real 0m2.610s 00:07:57.219 user 0m2.345s 00:07:57.219 sys 0m0.264s 00:07:57.219 05:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.219 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.219 ************************************ 00:07:57.219 END TEST accel_comp 00:07:57.219 ************************************ 00:07:57.219 05:10:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:57.219 05:10:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:57.219 05:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.219 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.219 ************************************ 00:07:57.219 START TEST accel_decomp 00:07:57.219 ************************************ 00:07:57.219 05:10:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:57.219 05:10:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.219 05:10:13 -- accel/accel.sh@17 -- # local accel_module 00:07:57.219 05:10:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:57.219 05:10:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:57.219 05:10:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.219 05:10:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.219 05:10:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.219 05:10:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.219 05:10:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.219 05:10:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.219 05:10:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.219 05:10:13 -- accel/accel.sh@42 -- # jq -r . 00:07:57.219 [2024-11-19 05:10:13.478667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
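Both compression cases print Preparing input file... because -l points accel_perf at a real input (the bib sample in the repo): the compress run above reports Verify: No, while these decompress runs add -y and report Verify: Yes. A sketch of the pair as it could be run outside the harness, with the same caveat about the omitted -c config fd; whether accel_perf pre-compresses the input once at startup for -w decompress is an assumption, not something this log states:

    BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
    $PERF -t 1 -w compress   -l $BIB      # Verify: No in the table above
    $PERF -t 1 -w decompress -l $BIB -y   # Verify: Yes; -y checks the round trip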
00:07:57.219 [2024-11-19 05:10:13.478732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664380 ] 00:07:57.219 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.219 [2024-11-19 05:10:13.547803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.219 [2024-11-19 05:10:13.582642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.599 05:10:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.599 00:07:58.599 SPDK Configuration: 00:07:58.599 Core mask: 0x1 00:07:58.599 00:07:58.599 Accel Perf Configuration: 00:07:58.599 Workload Type: decompress 00:07:58.599 Transfer size: 4096 bytes 00:07:58.599 Vector count 1 00:07:58.599 Module: software 00:07:58.599 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:58.599 Queue depth: 32 00:07:58.599 Allocate depth: 32 00:07:58.599 # threads/core: 1 00:07:58.599 Run time: 1 seconds 00:07:58.599 Verify: Yes 00:07:58.599 00:07:58.599 Running for 1 seconds... 00:07:58.599 00:07:58.599 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.599 ------------------------------------------------------------------------------------ 00:07:58.599 0,0 88032/s 162 MiB/s 0 0 00:07:58.599 ==================================================================================== 00:07:58.599 Total 88032/s 343 MiB/s 0 0' 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:58.599 05:10:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:58.599 05:10:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.599 05:10:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.599 05:10:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.599 05:10:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.599 05:10:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.599 05:10:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.599 05:10:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.599 05:10:14 -- accel/accel.sh@42 -- # jq -r . 00:07:58.599 [2024-11-19 05:10:14.774227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
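Every traced invocation passes -c /dev/fd/62, meaning the JSON accel config reaches accel_perf on an inherited file descriptor rather than a temp file (the jq -r . step in build_accel_config is that JSON being assembled and validated). A hypothetical reconstruction of the plumbing with a bash here-string bound to fd 62, reusing PERF and BIB from the sketches above; the JSON body is a placeholder, and an effectively empty config again leaves the software module selected:

    json='{}'   # placeholder; the harness builds this from accel_json_cfg
    $PERF -c /dev/fd/62 -t 1 -w decompress -l $BIB -y 62<<< "$json"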
00:07:58.599 [2024-11-19 05:10:14.774294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664648 ] 00:07:58.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.599 [2024-11-19 05:10:14.842034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.599 [2024-11-19 05:10:14.875973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=0x1 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=decompress 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=software 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=32 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- 
accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=32 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=1 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val=Yes 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.599 05:10:14 -- accel/accel.sh@21 -- # val= 00:07:58.599 05:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.599 05:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:59.536 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.536 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.536 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.536 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.536 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.536 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.537 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.537 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.537 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 05:10:16 -- accel/accel.sh@21 -- # val= 00:07:59.537 05:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 05:10:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 05:10:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.537 05:10:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.537 05:10:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.537 00:07:59.537 real 0m2.592s 00:07:59.537 user 0m2.349s 00:07:59.537 sys 0m0.242s 00:07:59.537 05:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.537 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.537 ************************************ 00:07:59.537 END TEST accel_decomp 00:07:59.537 ************************************ 00:07:59.537 05:10:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.537 05:10:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:59.537 05:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.537 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.537 ************************************ 00:07:59.537 START TEST accel_decmop_full 00:07:59.537 ************************************ 00:07:59.537 05:10:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.537 05:10:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.537 05:10:16 -- accel/accel.sh@17 -- # local accel_module 00:07:59.537 05:10:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.537 05:10:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.537 05:10:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.537 05:10:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.537 05:10:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.537 05:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.537 05:10:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.537 05:10:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.537 05:10:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.537 05:10:16 -- accel/accel.sh@42 -- # jq -r . 00:07:59.796 [2024-11-19 05:10:16.115382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:59.796 [2024-11-19 05:10:16.115461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664929 ] 00:07:59.796 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.796 [2024-11-19 05:10:16.185297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.796 [2024-11-19 05:10:16.220289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.175 05:10:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:01.175 00:08:01.175 SPDK Configuration: 00:08:01.175 Core mask: 0x1 00:08:01.175 00:08:01.175 Accel Perf Configuration: 00:08:01.175 Workload Type: decompress 00:08:01.175 Transfer size: 111250 bytes 00:08:01.175 Vector count 1 00:08:01.175 Module: software 00:08:01.175 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.175 Queue depth: 32 00:08:01.175 Allocate depth: 32 00:08:01.175 # threads/core: 1 00:08:01.175 Run time: 1 seconds 00:08:01.175 Verify: Yes 00:08:01.175 00:08:01.175 Running for 1 seconds... 
00:08:01.175 00:08:01.175 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.175 ------------------------------------------------------------------------------------ 00:08:01.175 0,0 5760/s 237 MiB/s 0 0 00:08:01.175 ==================================================================================== 00:08:01.175 Total 5760/s 611 MiB/s 0 0' 00:08:01.175 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.175 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.175 05:10:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.175 05:10:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.175 05:10:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.175 05:10:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.175 05:10:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.175 05:10:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.175 05:10:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.175 05:10:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.175 05:10:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.175 05:10:17 -- accel/accel.sh@42 -- # jq -r . 00:08:01.175 [2024-11-19 05:10:17.425306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.175 [2024-11-19 05:10:17.425372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665203 ] 00:08:01.175 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.176 [2024-11-19 05:10:17.492395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.176 [2024-11-19 05:10:17.526290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=0x1 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=decompress 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 
00:08:01.176 05:10:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=software 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=32 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=32 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=1 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val=Yes 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.176 05:10:17 -- accel/accel.sh@21 -- # val= 00:08:01.176 05:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.176 05:10:17 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- 
accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@21 -- # val= 00:08:02.556 05:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # IFS=: 00:08:02.556 05:10:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.556 05:10:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.556 05:10:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:02.556 05:10:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.556 00:08:02.556 real 0m2.614s 00:08:02.556 user 0m2.357s 00:08:02.556 sys 0m0.255s 00:08:02.556 05:10:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.556 05:10:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 ************************************ 00:08:02.556 END TEST accel_decmop_full 00:08:02.556 ************************************ 00:08:02.556 05:10:18 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:02.556 05:10:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:02.556 05:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.556 05:10:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 ************************************ 00:08:02.556 START TEST accel_decomp_mcore 00:08:02.556 ************************************ 00:08:02.556 05:10:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:02.556 05:10:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.556 05:10:18 -- accel/accel.sh@17 -- # local accel_module 00:08:02.556 05:10:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:02.557 05:10:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:02.557 05:10:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.557 05:10:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.557 05:10:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.557 05:10:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.557 05:10:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.557 05:10:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.557 05:10:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.557 05:10:18 -- accel/accel.sh@42 -- # jq -r . 00:08:02.557 [2024-11-19 05:10:18.772839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
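A note on the trace format itself: the accel/accel.sh@21-style prefixes on every traced command come from bash xtrace with a customized PS4 (SPDK sets its own in test/common/autotest_common.sh). A hypothetical minimal PS4 that produces the same shape, assuming nothing about the real definition:

# Hypothetical sketch only; bash expands \t to wall-clock time and the
# variables once per traced command, yielding 'HH:MM:SS -- file@line -- # cmd'.
export PS4=' \t -- ${BASH_SOURCE}@${LINENO} -- # '
set -x

Each traced line above therefore carries two clocks: the Jenkins elapsed-time stamp (00:08:02.xxx) followed by this wall-clock stamp (05:10:18).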
00:08:02.557 [2024-11-19 05:10:18.772908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665454 ] 00:08:02.557 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.557 [2024-11-19 05:10:18.843286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.557 [2024-11-19 05:10:18.880868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.557 [2024-11-19 05:10:18.880965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.557 [2024-11-19 05:10:18.881028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.557 [2024-11-19 05:10:18.881030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.536 05:10:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:03.536 00:08:03.536 SPDK Configuration: 00:08:03.536 Core mask: 0xf 00:08:03.536 00:08:03.536 Accel Perf Configuration: 00:08:03.536 Workload Type: decompress 00:08:03.536 Transfer size: 4096 bytes 00:08:03.536 Vector count 1 00:08:03.536 Module: software 00:08:03.536 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.536 Queue depth: 32 00:08:03.536 Allocate depth: 32 00:08:03.536 # threads/core: 1 00:08:03.536 Run time: 1 seconds 00:08:03.536 Verify: Yes 00:08:03.536 00:08:03.536 Running for 1 seconds... 00:08:03.536 00:08:03.536 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.536 ------------------------------------------------------------------------------------ 00:08:03.536 0,0 69888/s 128 MiB/s 0 0 00:08:03.536 3,0 73984/s 136 MiB/s 0 0 00:08:03.536 2,0 73696/s 135 MiB/s 0 0 00:08:03.536 1,0 73632/s 135 MiB/s 0 0 00:08:03.536 ==================================================================================== 00:08:03.536 Total 291200/s 1137 MiB/s 0 0' 00:08:03.536 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.536 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.536 05:10:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.536 05:10:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.536 05:10:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.536 05:10:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.536 05:10:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.536 05:10:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.536 05:10:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.536 05:10:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.536 05:10:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.536 05:10:20 -- accel/accel.sh@42 -- # jq -r . 00:08:03.536 [2024-11-19 05:10:20.084316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
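The -m 0xf mask in that invocation puts reactors on cores 0 through 3, which is why the results table above has four per-core rows. The Total row is simply their sum, and the total bandwidth follows from the 4096-byte transfer size; a quick sanity check with illustrative shell arithmetic, numbers copied from the table:

echo $((69888 + 73984 + 73696 + 73632))   # 291200 transfers/s: the Total row
echo $((291200 * 4096 / 1024 / 1024))     # 1137 MiB/s: matches the Total row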
00:08:03.536 [2024-11-19 05:10:20.084388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665608 ] 00:08:03.796 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.796 [2024-11-19 05:10:20.156759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.796 [2024-11-19 05:10:20.195719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.796 [2024-11-19 05:10:20.195818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.796 [2024-11-19 05:10:20.195908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.796 [2024-11-19 05:10:20.195909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=0xf 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=decompress 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=software 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=32 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=32 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=1 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val=Yes 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:03.796 05:10:20 -- accel/accel.sh@21 -- # val= 00:08:03.796 05:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # IFS=: 00:08:03.796 05:10:20 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 
-- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@21 -- # val= 00:08:05.177 05:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # IFS=: 00:08:05.177 05:10:21 -- accel/accel.sh@20 -- # read -r var val 00:08:05.177 05:10:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.177 05:10:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:05.177 05:10:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.177 00:08:05.177 real 0m2.632s 00:08:05.177 user 0m9.021s 00:08:05.177 sys 0m0.276s 00:08:05.177 05:10:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.177 05:10:21 -- common/autotest_common.sh@10 -- # set +x 00:08:05.177 ************************************ 00:08:05.177 END TEST accel_decomp_mcore 00:08:05.177 ************************************ 00:08:05.177 05:10:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.177 05:10:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:05.177 05:10:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.177 05:10:21 -- common/autotest_common.sh@10 -- # set +x 00:08:05.177 ************************************ 00:08:05.177 START TEST accel_decomp_full_mcore 00:08:05.177 ************************************ 00:08:05.177 05:10:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.177 05:10:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.177 05:10:21 -- accel/accel.sh@17 -- # local accel_module 00:08:05.177 05:10:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.177 05:10:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.177 05:10:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.177 05:10:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.177 05:10:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.177 05:10:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.177 05:10:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.177 05:10:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.177 05:10:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.177 05:10:21 -- accel/accel.sh@42 -- # jq -r . 00:08:05.177 [2024-11-19 05:10:21.455047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
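Each run is preceded by "EAL: No free 2048 kB hugepages reported on node 1"; the runs still proceed, so the notice is informational here (the pool on the other NUMA node presumably suffices). To inspect the per-node 2 MiB hugepage pools on such a box one could read the standard kernel sysfs entries (illustrative; these paths are not taken from this log):

grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages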
00:08:05.177 [2024-11-19 05:10:21.455129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665820 ] 00:08:05.177 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.177 [2024-11-19 05:10:21.525417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.177 [2024-11-19 05:10:21.563108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.177 [2024-11-19 05:10:21.563206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.177 [2024-11-19 05:10:21.563294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.177 [2024-11-19 05:10:21.563296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.562 05:10:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:06.562 00:08:06.562 SPDK Configuration: 00:08:06.562 Core mask: 0xf 00:08:06.562 00:08:06.562 Accel Perf Configuration: 00:08:06.562 Workload Type: decompress 00:08:06.562 Transfer size: 111250 bytes 00:08:06.562 Vector count 1 00:08:06.562 Module: software 00:08:06.562 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.562 Queue depth: 32 00:08:06.562 Allocate depth: 32 00:08:06.562 # threads/core: 1 00:08:06.562 Run time: 1 seconds 00:08:06.562 Verify: Yes 00:08:06.562 00:08:06.562 Running for 1 seconds... 00:08:06.562 00:08:06.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.563 ------------------------------------------------------------------------------------ 00:08:06.563 0,0 5696/s 235 MiB/s 0 0 00:08:06.563 3,0 5696/s 235 MiB/s 0 0 00:08:06.563 2,0 5696/s 235 MiB/s 0 0 00:08:06.563 1,0 5696/s 235 MiB/s 0 0 00:08:06.563 ==================================================================================== 00:08:06.563 Total 22784/s 2417 MiB/s 0 0' 00:08:06.563 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.563 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.563 05:10:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.564 05:10:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.564 05:10:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.564 05:10:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.564 05:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.564 05:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.564 05:10:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.564 05:10:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.564 05:10:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.564 05:10:22 -- accel/accel.sh@42 -- # jq -r . 00:08:06.564 [2024-11-19 05:10:22.772586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
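The same cross-check works for the full-buffer mcore table above: each of the four cores sustains 5696 transfers/s, and the Total bandwidth follows from the 111250-byte transfer size. (The per-row MiB/s column is evidently derived from a different byte count, possibly the compressed input size; the Total row matches the decompressed output size.)

echo $((4 * 5696))                      # 22784 transfers/s: the Total row
echo $((22784 * 111250 / 1024 / 1024))  # 2417 MiB/s: matches the Total row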
00:08:06.564 [2024-11-19 05:10:22.772653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666072 ] 00:08:06.564 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.564 [2024-11-19 05:10:22.841777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.564 [2024-11-19 05:10:22.878862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.564 [2024-11-19 05:10:22.878956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.564 [2024-11-19 05:10:22.879043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.564 [2024-11-19 05:10:22.879045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.564 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.564 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.564 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.564 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.564 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.564 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.565 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.565 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.565 05:10:22 -- accel/accel.sh@21 -- # val=0xf 00:08:06.565 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.565 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.565 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.565 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.565 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.565 05:10:22 -- accel/accel.sh@21 -- # val=decompress 00:08:06.565 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.565 05:10:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.565 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.566 05:10:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:06.566 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.566 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.566 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.566 05:10:22 -- accel/accel.sh@21 -- # val=software 00:08:06.566 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.566 05:10:22 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.566 05:10:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.566 05:10:22 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.566 05:10:22 -- accel/accel.sh@21 -- # val=32 00:08:06.566 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.566 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val=32 00:08:06.567 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val=1 00:08:06.567 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.567 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val=Yes 00:08:06.567 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.567 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.567 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.567 05:10:22 -- accel/accel.sh@21 -- # val= 00:08:06.568 05:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.568 05:10:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.568 05:10:22 -- accel/accel.sh@20 -- # read -r var val 00:08:07.509 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.509 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.509 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.509 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.509 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.509 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.509 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.509 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.509 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.509 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.509 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.510 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.510 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.510 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.510 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.510 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.510 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.510 05:10:24 
-- accel/accel.sh@20 -- # IFS=: 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.510 05:10:24 -- accel/accel.sh@21 -- # val= 00:08:07.510 05:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.510 05:10:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.510 05:10:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.510 05:10:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.510 05:10:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.510 00:08:07.510 real 0m2.644s 00:08:07.510 user 0m9.086s 00:08:07.510 sys 0m0.270s 00:08:07.510 05:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.510 05:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 ************************************ 00:08:07.510 END TEST accel_decomp_full_mcore 00:08:07.510 ************************************ 00:08:07.769 05:10:24 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.769 05:10:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:07.769 05:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.769 05:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:07.769 ************************************ 00:08:07.769 START TEST accel_decomp_mthread 00:08:07.769 ************************************ 00:08:07.769 05:10:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.769 05:10:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.769 05:10:24 -- accel/accel.sh@17 -- # local accel_module 00:08:07.769 05:10:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.769 05:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.769 05:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.769 05:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.769 05:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.769 05:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.769 05:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.769 05:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.769 05:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.769 05:10:24 -- accel/accel.sh@42 -- # jq -r . 00:08:07.769 [2024-11-19 05:10:24.147264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.769 [2024-11-19 05:10:24.147336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666358 ] 00:08:07.769 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.769 [2024-11-19 05:10:24.217196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.769 [2024-11-19 05:10:24.251662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.147 05:10:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:09.147 00:08:09.147 SPDK Configuration: 00:08:09.147 Core mask: 0x1 00:08:09.147 00:08:09.147 Accel Perf Configuration: 00:08:09.147 Workload Type: decompress 00:08:09.147 Transfer size: 4096 bytes 00:08:09.147 Vector count 1 00:08:09.147 Module: software 00:08:09.147 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.147 Queue depth: 32 00:08:09.147 Allocate depth: 32 00:08:09.147 # threads/core: 2 00:08:09.147 Run time: 1 seconds 00:08:09.147 Verify: Yes 00:08:09.147 00:08:09.147 Running for 1 seconds... 00:08:09.147 00:08:09.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.147 ------------------------------------------------------------------------------------ 00:08:09.147 0,1 44480/s 81 MiB/s 0 0 00:08:09.147 0,0 44320/s 81 MiB/s 0 0 00:08:09.147 ==================================================================================== 00:08:09.147 Total 88800/s 346 MiB/s 0 0' 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.147 05:10:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:09.147 05:10:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.147 05:10:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.147 05:10:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.147 05:10:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.147 05:10:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.147 05:10:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.147 05:10:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.147 05:10:25 -- accel/accel.sh@42 -- # jq -r . 00:08:09.147 [2024-11-19 05:10:25.449784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
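This variant runs with -T 2, so the table above shows two rows for core 0 (the Core,Thread labels 0,0 and 0,1) rather than rows for several cores. The two threads' rates again sum to the Total row; illustrative arithmetic:

echo $((44480 + 44320))               # 88800 transfers/s: the Total row
echo $((88800 * 4096 / 1024 / 1024))  # 346 MiB/s: matches the Total row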
00:08:09.147 [2024-11-19 05:10:25.449851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666624 ] 00:08:09.147 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.147 [2024-11-19 05:10:25.518502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.147 [2024-11-19 05:10:25.552864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=0x1 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=decompress 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=software 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=32 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- 
accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=32 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.147 05:10:25 -- accel/accel.sh@21 -- # val=2 00:08:09.147 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.147 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.148 05:10:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.148 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.148 05:10:25 -- accel/accel.sh@21 -- # val=Yes 00:08:09.148 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.148 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.148 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.148 05:10:25 -- accel/accel.sh@21 -- # val= 00:08:09.148 05:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # IFS=: 00:08:09.148 05:10:25 -- accel/accel.sh@20 -- # read -r var val 00:08:10.525 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.525 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.525 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.525 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.525 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.525 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.525 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.525 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.525 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.526 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.526 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.526 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.526 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.526 05:10:26 -- accel/accel.sh@21 -- # val= 00:08:10.526 05:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # IFS=: 00:08:10.526 05:10:26 -- accel/accel.sh@20 -- # read -r var val 00:08:10.526 05:10:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.526 05:10:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.526 05:10:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.526 00:08:10.526 real 0m2.611s 00:08:10.526 user 0m2.368s 00:08:10.526 sys 0m0.253s 00:08:10.526 05:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.526 05:10:26 -- common/autotest_common.sh@10 -- # set +x 
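The real/user/sys triplet printed just above is bash's time output: run_test wraps each case in timing plus the START/END banners. A rough sketch of that wrapper's shape, assuming nothing about the real helper in autotest_common.sh (which also manages xtrace and failure handling):

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"             # emits the real/user/sys lines seen above
    echo "END TEST $name"
}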
00:08:10.526 ************************************ 00:08:10.526 END TEST accel_decomp_mthread 00:08:10.526 ************************************ 00:08:10.526 05:10:26 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.526 05:10:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:10.526 05:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.526 05:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:10.526 ************************************ 00:08:10.526 START TEST accel_deomp_full_mthread 00:08:10.526 ************************************ 00:08:10.526 05:10:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.526 05:10:26 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.526 05:10:26 -- accel/accel.sh@17 -- # local accel_module 00:08:10.526 05:10:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.526 05:10:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.526 05:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.526 05:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.526 05:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.526 05:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.526 05:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.526 05:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.526 05:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.526 05:10:26 -- accel/accel.sh@42 -- # jq -r . 00:08:10.526 [2024-11-19 05:10:26.806930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.526 [2024-11-19 05:10:26.807012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666913 ] 00:08:10.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.526 [2024-11-19 05:10:26.876615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.526 [2024-11-19 05:10:26.911758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.905 05:10:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:11.905 00:08:11.905 SPDK Configuration: 00:08:11.905 Core mask: 0x1 00:08:11.905 00:08:11.905 Accel Perf Configuration: 00:08:11.905 Workload Type: decompress 00:08:11.905 Transfer size: 111250 bytes 00:08:11.905 Vector count 1 00:08:11.905 Module: software 00:08:11.905 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:11.905 Queue depth: 32 00:08:11.905 Allocate depth: 32 00:08:11.905 # threads/core: 2 00:08:11.905 Run time: 1 seconds 00:08:11.905 Verify: Yes 00:08:11.905 00:08:11.905 Running for 1 seconds... 
00:08:11.905 00:08:11.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.905 ------------------------------------------------------------------------------------ 00:08:11.905 0,1 2848/s 117 MiB/s 0 0 00:08:11.905 0,0 2816/s 116 MiB/s 0 0 00:08:11.905 ==================================================================================== 00:08:11.905 Total 5664/s 600 MiB/s 0 0' 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.905 05:10:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.905 05:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.905 05:10:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.905 05:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.905 05:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.905 05:10:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.905 05:10:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.905 05:10:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.905 05:10:28 -- accel/accel.sh@42 -- # jq -r . 00:08:11.905 [2024-11-19 05:10:28.131071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.905 [2024-11-19 05:10:28.131137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667181 ] 00:08:11.905 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.905 [2024-11-19 05:10:28.199711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.905 [2024-11-19 05:10:28.233652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val=0x1 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.905 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.905 05:10:28 -- accel/accel.sh@21 -- # val=decompress 00:08:11.905 05:10:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:11.905 05:10:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:11.905 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=software 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=32 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=32 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=2 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val=Yes 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.906 05:10:28 -- accel/accel.sh@21 -- # val= 00:08:11.906 05:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.906 05:10:28 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 
00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@21 -- # val= 00:08:13.289 05:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # IFS=: 00:08:13.289 05:10:29 -- accel/accel.sh@20 -- # read -r var val 00:08:13.289 05:10:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.289 05:10:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.289 05:10:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.289 00:08:13.289 real 0m2.654s 00:08:13.289 user 0m2.412s 00:08:13.289 sys 0m0.252s 00:08:13.289 05:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.289 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.289 ************************************ 00:08:13.289 END TEST accel_deomp_full_mthread 00:08:13.289 ************************************ 00:08:13.289 05:10:29 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:13.289 05:10:29 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:13.289 05:10:29 -- accel/accel.sh@129 -- # build_accel_config 00:08:13.289 05:10:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.289 05:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.289 05:10:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.289 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.289 05:10:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.289 05:10:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.289 05:10:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.289 05:10:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.289 05:10:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.289 05:10:29 -- accel/accel.sh@42 -- # jq -r . 00:08:13.289 ************************************ 00:08:13.289 START TEST accel_dif_functional_tests 00:08:13.289 ************************************ 00:08:13.289 05:10:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:13.289 [2024-11-19 05:10:29.524733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
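The DIF suite that follows exercises SPDK's Data Integrity Field checks: each "verify: DIF not generated" case presents data whose guard, app tag, or ref tag does not match and expects the corresponding compare failure, which is why *ERROR* lines appear for tests that pass. The test binary is driven the same way as accel_perf; a sketch of the invocation, with only the path and -c flag taken from the log:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$spdk/test/accel/dif/dif" -c /dev/fd/62   # JSON accel config on fd 62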
00:08:13.289 [2024-11-19 05:10:29.524786] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667467 ] 00:08:13.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.289 [2024-11-19 05:10:29.591647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.289 [2024-11-19 05:10:29.628546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.289 [2024-11-19 05:10:29.628607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.289 [2024-11-19 05:10:29.628610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.289 00:08:13.289 00:08:13.289 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.289 http://cunit.sourceforge.net/ 00:08:13.289 00:08:13.289 00:08:13.289 Suite: accel_dif 00:08:13.289 Test: verify: DIF generated, GUARD check ...passed 00:08:13.289 Test: verify: DIF generated, APPTAG check ...passed 00:08:13.289 Test: verify: DIF generated, REFTAG check ...passed 00:08:13.289 Test: verify: DIF not generated, GUARD check ...[2024-11-19 05:10:29.691621] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:13.289 [2024-11-19 05:10:29.691669] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:13.289 passed 00:08:13.289 Test: verify: DIF not generated, APPTAG check ...[2024-11-19 05:10:29.691715] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:13.289 [2024-11-19 05:10:29.691732] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:13.289 passed 00:08:13.289 Test: verify: DIF not generated, REFTAG check ...[2024-11-19 05:10:29.691752] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:13.289 [2024-11-19 05:10:29.691769] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:13.289 passed 00:08:13.289 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:13.289 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-19 05:10:29.691811] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:13.289 passed 00:08:13.289 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:13.289 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:13.289 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:13.289 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-19 05:10:29.691913] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:13.289 passed 00:08:13.289 Test: generate copy: DIF generated, GUARD check ...passed 00:08:13.289 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:13.289 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:13.289 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:13.289 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:13.289 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:13.289 Test: generate copy: iovecs-len validate ...[2024-11-19 05:10:29.692084] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:13.289 passed 00:08:13.289 Test: generate copy: buffer alignment validate ...passed 00:08:13.289 00:08:13.289 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.289 suites 1 1 n/a 0 0 00:08:13.289 tests 20 20 20 0 0 00:08:13.289 asserts 204 204 204 0 n/a 00:08:13.289 00:08:13.289 Elapsed time = 0.002 seconds 00:08:13.289 00:08:13.289 real 0m0.365s 00:08:13.289 user 0m0.547s 00:08:13.289 sys 0m0.156s 00:08:13.289 05:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.289 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.289 ************************************ 00:08:13.289 END TEST accel_dif_functional_tests 00:08:13.289 ************************************ 00:08:13.549 00:08:13.549 real 0m55.692s 00:08:13.549 user 1m3.350s 00:08:13.549 sys 0m6.912s 00:08:13.549 05:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.549 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.549 ************************************ 00:08:13.549 END TEST accel 00:08:13.549 ************************************ 00:08:13.549 05:10:29 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:13.549 05:10:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.549 05:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.549 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:13.549 ************************************ 00:08:13.549 START TEST accel_rpc 00:08:13.549 ************************************ 00:08:13.549 05:10:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:13.549 * Looking for test storage... 00:08:13.549 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:13.549 05:10:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.549 05:10:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.549 05:10:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.809 05:10:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.809 05:10:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.809 05:10:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.809 05:10:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.809 05:10:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.809 05:10:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.809 05:10:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.809 05:10:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.809 05:10:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.809 05:10:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.809 05:10:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.809 05:10:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.809 05:10:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.809 05:10:30 -- scripts/common.sh@344 -- # : 1 00:08:13.809 05:10:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.809 05:10:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.809 05:10:30 -- scripts/common.sh@364 -- # decimal 1 00:08:13.809 05:10:30 -- scripts/common.sh@352 -- # local d=1 00:08:13.809 05:10:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.809 05:10:30 -- scripts/common.sh@354 -- # echo 1 00:08:13.809 05:10:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.809 05:10:30 -- scripts/common.sh@365 -- # decimal 2 00:08:13.809 05:10:30 -- scripts/common.sh@352 -- # local d=2 00:08:13.809 05:10:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.809 05:10:30 -- scripts/common.sh@354 -- # echo 2 00:08:13.809 05:10:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.809 05:10:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.809 05:10:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.809 05:10:30 -- scripts/common.sh@367 -- # return 0 00:08:13.809 05:10:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.809 05:10:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.809 --rc genhtml_branch_coverage=1 00:08:13.809 --rc genhtml_function_coverage=1 00:08:13.809 --rc genhtml_legend=1 00:08:13.809 --rc geninfo_all_blocks=1 00:08:13.809 --rc geninfo_unexecuted_blocks=1 00:08:13.809 00:08:13.809 ' 00:08:13.809 05:10:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.809 --rc genhtml_branch_coverage=1 00:08:13.809 --rc genhtml_function_coverage=1 00:08:13.809 --rc genhtml_legend=1 00:08:13.809 --rc geninfo_all_blocks=1 00:08:13.809 --rc geninfo_unexecuted_blocks=1 00:08:13.809 00:08:13.809 ' 00:08:13.809 05:10:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.809 --rc genhtml_branch_coverage=1 00:08:13.809 --rc genhtml_function_coverage=1 00:08:13.809 --rc genhtml_legend=1 00:08:13.809 --rc geninfo_all_blocks=1 00:08:13.809 --rc geninfo_unexecuted_blocks=1 00:08:13.809 00:08:13.809 ' 00:08:13.809 05:10:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.809 --rc genhtml_branch_coverage=1 00:08:13.809 --rc genhtml_function_coverage=1 00:08:13.809 --rc genhtml_legend=1 00:08:13.809 --rc geninfo_all_blocks=1 00:08:13.809 --rc geninfo_unexecuted_blocks=1 00:08:13.809 00:08:13.809 ' 00:08:13.809 05:10:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.809 05:10:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1667540 00:08:13.809 05:10:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 1667540 00:08:13.809 05:10:30 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:13.809 05:10:30 -- common/autotest_common.sh@829 -- # '[' -z 1667540 ']' 00:08:13.809 05:10:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.809 05:10:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.809 05:10:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
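[editor's note] The "lt 1.15 2" check that just ran gates the lcov options: it splits both dotted versions on ".", "-", and ":" and compares field by field. A simplified sketch of that comparison (the real scripts/common.sh cmp_versions also handles ">", "==", and non-numeric fields; this hardcodes less-than):

lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly newer: not "<"
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly older: "<" holds
    done
    return 1                                            # equal: not "<"
}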
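[editor's note] The accel_rpc run continuing below follows this sequence: start spdk_tgt with --wait-for-rpc so the framework stays uninitialized, reassign the copy opcode over RPC, finish startup, then verify the assignment. A condensed sketch of the same flow ($SPDK_BIN_DIR and the bare rpc.py stand in for the full workspace paths shown in the log; waitforlisten is the harness helper from autotest_common.sh):

"$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"             # poll /var/tmp/spdk.sock until up

# Both assignments are accepted pre-init; the second overrides the first,
# matching the two *NOTICE* lines from accel_rpc.c below.
rpc.py accel_assign_opc -o copy -m incorrect
rpc.py accel_assign_opc -o copy -m software
rpc.py framework_start_init               # complete subsystem initialization

# Confirm copy now routes to the software module.
rpc.py accel_get_opc_assignments | jq -r .copy | grep software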
00:08:13.809 05:10:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.809 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.809 [2024-11-19 05:10:30.182462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.809 [2024-11-19 05:10:30.182516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667540 ] 00:08:13.809 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.810 [2024-11-19 05:10:30.253565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.810 [2024-11-19 05:10:30.290557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.810 [2024-11-19 05:10:30.290680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.810 05:10:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.810 05:10:30 -- common/autotest_common.sh@862 -- # return 0 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:13.810 05:10:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.810 05:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.810 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.810 ************************************ 00:08:13.810 START TEST accel_assign_opcode 00:08:13.810 ************************************ 00:08:13.810 05:10:30 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:13.810 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.810 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.810 [2024-11-19 05:10:30.331066] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:13.810 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:13.810 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.810 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.810 [2024-11-19 05:10:30.339080] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:13.810 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.810 05:10:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:13.810 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.810 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.069 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.069 05:10:30 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:14.069 05:10:30 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:14.069 05:10:30 -- accel/accel_rpc.sh@42 -- # grep software 00:08:14.069 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.069 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.069 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:14.069 software 00:08:14.069 00:08:14.069 real 0m0.219s 00:08:14.069 user 0m0.041s 00:08:14.069 sys 0m0.012s 00:08:14.069 05:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.069 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.069 ************************************ 00:08:14.069 END TEST accel_assign_opcode 00:08:14.069 ************************************ 00:08:14.069 05:10:30 -- accel/accel_rpc.sh@55 -- # killprocess 1667540 00:08:14.069 05:10:30 -- common/autotest_common.sh@936 -- # '[' -z 1667540 ']' 00:08:14.069 05:10:30 -- common/autotest_common.sh@940 -- # kill -0 1667540 00:08:14.069 05:10:30 -- common/autotest_common.sh@941 -- # uname 00:08:14.069 05:10:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.069 05:10:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1667540 00:08:14.328 05:10:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.328 05:10:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.328 05:10:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1667540' 00:08:14.328 killing process with pid 1667540 00:08:14.328 05:10:30 -- common/autotest_common.sh@955 -- # kill 1667540 00:08:14.328 05:10:30 -- common/autotest_common.sh@960 -- # wait 1667540 00:08:14.588 00:08:14.588 real 0m1.011s 00:08:14.588 user 0m0.904s 00:08:14.588 sys 0m0.458s 00:08:14.588 05:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.588 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.588 ************************************ 00:08:14.588 END TEST accel_rpc 00:08:14.588 ************************************ 00:08:14.588 05:10:30 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:14.588 05:10:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.588 05:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.588 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.588 ************************************ 00:08:14.588 START TEST app_cmdline 00:08:14.588 ************************************ 00:08:14.588 05:10:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:14.588 * Looking for test storage... 
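[editor's note] The app_cmdline test starting here exercises the --rpcs-allowed filter: the target whitelists only spdk_get_version and rpc_get_methods, and any other method (env_dpdk_get_mem_stats below) is rejected with JSON-RPC error -32601, "Method not found". A minimal sketch of that check (paths shortened; rpc.py as invoked in the log):

build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
waitforlisten $!

rpc.py spdk_get_version                          # allowed: returns the version JSON
rpc.py rpc_get_methods | jq -r '.[]' | sort      # allowed: lists exactly the two methods

# Any non-whitelisted method must fail with "Method not found" (-32601):
if rpc.py env_dpdk_get_mem_stats; then
    echo "filter did not reject the call" && exit 1
fi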
00:08:14.588 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:14.588 05:10:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.588 05:10:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.588 05:10:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.868 05:10:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.868 05:10:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.868 05:10:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.868 05:10:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.868 05:10:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.868 05:10:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.868 05:10:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.868 05:10:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.868 05:10:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.868 05:10:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.868 05:10:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.868 05:10:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.868 05:10:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.868 05:10:31 -- scripts/common.sh@344 -- # : 1 00:08:14.868 05:10:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.868 05:10:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.868 05:10:31 -- scripts/common.sh@364 -- # decimal 1 00:08:14.868 05:10:31 -- scripts/common.sh@352 -- # local d=1 00:08:14.868 05:10:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.868 05:10:31 -- scripts/common.sh@354 -- # echo 1 00:08:14.868 05:10:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.868 05:10:31 -- scripts/common.sh@365 -- # decimal 2 00:08:14.868 05:10:31 -- scripts/common.sh@352 -- # local d=2 00:08:14.868 05:10:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.868 05:10:31 -- scripts/common.sh@354 -- # echo 2 00:08:14.868 05:10:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.868 05:10:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.868 05:10:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.868 05:10:31 -- scripts/common.sh@367 -- # return 0 00:08:14.868 05:10:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.868 05:10:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.868 --rc genhtml_branch_coverage=1 00:08:14.868 --rc genhtml_function_coverage=1 00:08:14.868 --rc genhtml_legend=1 00:08:14.868 --rc geninfo_all_blocks=1 00:08:14.868 --rc geninfo_unexecuted_blocks=1 00:08:14.868 00:08:14.868 ' 00:08:14.868 05:10:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.868 --rc genhtml_branch_coverage=1 00:08:14.868 --rc genhtml_function_coverage=1 00:08:14.868 --rc genhtml_legend=1 00:08:14.868 --rc geninfo_all_blocks=1 00:08:14.868 --rc geninfo_unexecuted_blocks=1 00:08:14.868 00:08:14.868 ' 00:08:14.868 05:10:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.868 --rc genhtml_branch_coverage=1 00:08:14.868 --rc genhtml_function_coverage=1 00:08:14.868 --rc genhtml_legend=1 00:08:14.868 --rc geninfo_all_blocks=1 00:08:14.868 --rc geninfo_unexecuted_blocks=1 00:08:14.868 00:08:14.868 ' 
00:08:14.868 05:10:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.868 --rc genhtml_branch_coverage=1 00:08:14.868 --rc genhtml_function_coverage=1 00:08:14.868 --rc genhtml_legend=1 00:08:14.868 --rc geninfo_all_blocks=1 00:08:14.868 --rc geninfo_unexecuted_blocks=1 00:08:14.868 00:08:14.868 ' 00:08:14.868 05:10:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:14.868 05:10:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1667879 00:08:14.868 05:10:31 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:14.868 05:10:31 -- app/cmdline.sh@18 -- # waitforlisten 1667879 00:08:14.868 05:10:31 -- common/autotest_common.sh@829 -- # '[' -z 1667879 ']' 00:08:14.868 05:10:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.868 05:10:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.868 05:10:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.868 05:10:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.868 05:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.868 [2024-11-19 05:10:31.238419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:14.868 [2024-11-19 05:10:31.238470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667879 ] 00:08:14.868 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.868 [2024-11-19 05:10:31.307698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.868 [2024-11-19 05:10:31.343548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.868 [2024-11-19 05:10:31.343670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.521 05:10:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.521 05:10:32 -- common/autotest_common.sh@862 -- # return 0 00:08:15.521 05:10:32 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:15.781 { 00:08:15.781 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:15.781 "fields": { 00:08:15.781 "major": 24, 00:08:15.781 "minor": 1, 00:08:15.781 "patch": 1, 00:08:15.781 "suffix": "-pre", 00:08:15.781 "commit": "c13c99a5e" 00:08:15.781 } 00:08:15.781 } 00:08:15.781 05:10:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:15.781 05:10:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:15.781 05:10:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:15.781 05:10:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:15.781 05:10:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:15.781 05:10:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:15.781 05:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.781 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.781 05:10:32 -- app/cmdline.sh@26 -- # sort 00:08:15.781 05:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.781 05:10:32 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:15.781 05:10:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:15.781 05:10:32 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.781 05:10:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:15.781 05:10:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:15.781 05:10:32 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:15.781 05:10:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.781 05:10:32 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:15.781 05:10:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.781 05:10:32 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:15.781 05:10:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.781 05:10:32 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:15.781 05:10:32 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:15.781 05:10:32 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.040 request: 00:08:16.040 { 00:08:16.040 "method": "env_dpdk_get_mem_stats", 00:08:16.040 "req_id": 1 00:08:16.040 } 00:08:16.040 Got JSON-RPC error response 00:08:16.040 response: 00:08:16.040 { 00:08:16.040 "code": -32601, 00:08:16.040 "message": "Method not found" 00:08:16.040 } 00:08:16.040 05:10:32 -- common/autotest_common.sh@653 -- # es=1 00:08:16.040 05:10:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.040 05:10:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.040 05:10:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.040 05:10:32 -- app/cmdline.sh@1 -- # killprocess 1667879 00:08:16.040 05:10:32 -- common/autotest_common.sh@936 -- # '[' -z 1667879 ']' 00:08:16.040 05:10:32 -- common/autotest_common.sh@940 -- # kill -0 1667879 00:08:16.040 05:10:32 -- common/autotest_common.sh@941 -- # uname 00:08:16.040 05:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:16.040 05:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1667879 00:08:16.040 05:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:16.040 05:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:16.040 05:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1667879' 00:08:16.040 killing process with pid 1667879 00:08:16.040 05:10:32 -- common/autotest_common.sh@955 -- # kill 1667879 00:08:16.040 05:10:32 -- common/autotest_common.sh@960 -- # wait 1667879 00:08:16.300 00:08:16.300 real 0m1.796s 00:08:16.300 user 0m2.072s 00:08:16.300 sys 0m0.520s 00:08:16.300 05:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.300 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:16.300 ************************************ 00:08:16.300 END TEST app_cmdline 00:08:16.300 ************************************ 00:08:16.300 05:10:32 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:16.300 05:10:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.300 05:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.300 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:16.300 ************************************ 00:08:16.300 START TEST version 00:08:16.300 ************************************ 00:08:16.300 05:10:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:16.560 * Looking for test storage... 00:08:16.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:16.561 05:10:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:16.561 05:10:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:16.561 05:10:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:16.561 05:10:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:16.561 05:10:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:16.561 05:10:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:16.561 05:10:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:16.561 05:10:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:16.561 05:10:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:16.561 05:10:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.561 05:10:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:16.561 05:10:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:16.561 05:10:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:16.561 05:10:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:16.561 05:10:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:16.561 05:10:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:16.561 05:10:33 -- scripts/common.sh@344 -- # : 1 00:08:16.561 05:10:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:16.561 05:10:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.561 05:10:33 -- scripts/common.sh@364 -- # decimal 1 00:08:16.561 05:10:33 -- scripts/common.sh@352 -- # local d=1 00:08:16.561 05:10:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.561 05:10:33 -- scripts/common.sh@354 -- # echo 1 00:08:16.561 05:10:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:16.561 05:10:33 -- scripts/common.sh@365 -- # decimal 2 00:08:16.561 05:10:33 -- scripts/common.sh@352 -- # local d=2 00:08:16.561 05:10:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.561 05:10:33 -- scripts/common.sh@354 -- # echo 2 00:08:16.561 05:10:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:16.561 05:10:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:16.561 05:10:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:16.561 05:10:33 -- scripts/common.sh@367 -- # return 0 00:08:16.561 05:10:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.561 05:10:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.561 --rc genhtml_branch_coverage=1 00:08:16.561 --rc genhtml_function_coverage=1 00:08:16.561 --rc genhtml_legend=1 00:08:16.561 --rc geninfo_all_blocks=1 00:08:16.561 --rc geninfo_unexecuted_blocks=1 00:08:16.561 00:08:16.561 ' 00:08:16.561 05:10:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.561 --rc genhtml_branch_coverage=1 00:08:16.561 --rc genhtml_function_coverage=1 00:08:16.561 --rc genhtml_legend=1 00:08:16.561 --rc geninfo_all_blocks=1 00:08:16.561 --rc geninfo_unexecuted_blocks=1 00:08:16.561 00:08:16.561 ' 00:08:16.561 05:10:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.561 --rc genhtml_branch_coverage=1 00:08:16.561 --rc genhtml_function_coverage=1 00:08:16.561 --rc genhtml_legend=1 00:08:16.561 --rc geninfo_all_blocks=1 00:08:16.561 --rc geninfo_unexecuted_blocks=1 00:08:16.561 00:08:16.561 ' 00:08:16.561 05:10:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.561 --rc genhtml_branch_coverage=1 00:08:16.561 --rc genhtml_function_coverage=1 00:08:16.561 --rc genhtml_legend=1 00:08:16.561 --rc geninfo_all_blocks=1 00:08:16.561 --rc geninfo_unexecuted_blocks=1 00:08:16.561 00:08:16.561 ' 00:08:16.561 05:10:33 -- app/version.sh@17 -- # get_header_version major 00:08:16.561 05:10:33 -- app/version.sh@14 -- # tr -d '"' 00:08:16.561 05:10:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:16.561 05:10:33 -- app/version.sh@14 -- # cut -f2 00:08:16.561 05:10:33 -- app/version.sh@17 -- # major=24 00:08:16.561 05:10:33 -- app/version.sh@18 -- # get_header_version minor 00:08:16.561 05:10:33 -- app/version.sh@14 -- # tr -d '"' 00:08:16.561 05:10:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:16.561 05:10:33 -- app/version.sh@14 -- # cut -f2 00:08:16.561 05:10:33 -- app/version.sh@18 -- # minor=1 00:08:16.561 05:10:33 -- app/version.sh@19 -- # get_header_version patch 00:08:16.561 05:10:33 -- app/version.sh@14 -- # tr -d '"' 00:08:16.561 05:10:33 -- app/version.sh@13 -- # grep -E 
'^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:16.561 05:10:33 -- app/version.sh@14 -- # cut -f2 00:08:16.561 05:10:33 -- app/version.sh@19 -- # patch=1 00:08:16.561 05:10:33 -- app/version.sh@20 -- # get_header_version suffix 00:08:16.561 05:10:33 -- app/version.sh@14 -- # tr -d '"' 00:08:16.561 05:10:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:16.561 05:10:33 -- app/version.sh@14 -- # cut -f2 00:08:16.561 05:10:33 -- app/version.sh@20 -- # suffix=-pre 00:08:16.561 05:10:33 -- app/version.sh@22 -- # version=24.1 00:08:16.561 05:10:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:16.561 05:10:33 -- app/version.sh@25 -- # version=24.1.1 00:08:16.561 05:10:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:16.561 05:10:33 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:16.561 05:10:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:16.561 05:10:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:16.561 05:10:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:16.561 00:08:16.561 real 0m0.265s 00:08:16.561 user 0m0.163s 00:08:16.561 sys 0m0.146s 00:08:16.561 05:10:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.561 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.561 ************************************ 00:08:16.561 END TEST version 00:08:16.561 ************************************ 00:08:16.820 05:10:33 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@191 -- # uname -s 00:08:16.820 05:10:33 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:16.820 05:10:33 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:16.820 05:10:33 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:16.820 05:10:33 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:16.820 05:10:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.820 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 05:10:33 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:16.820 05:10:33 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:08:16.820 05:10:33 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:16.820 05:10:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:16.820 05:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.820 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 ************************************ 00:08:16.820 START TEST nvmf_rdma 00:08:16.820 ************************************ 00:08:16.820 05:10:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:16.820 * Looking for test 
storage... 00:08:16.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:16.820 05:10:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:16.821 05:10:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:16.821 05:10:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:16.821 05:10:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:16.821 05:10:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:16.821 05:10:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:16.821 05:10:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:16.821 05:10:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:16.821 05:10:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:16.821 05:10:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.080 05:10:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.080 05:10:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.080 05:10:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.080 05:10:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.080 05:10:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.080 05:10:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.080 05:10:33 -- scripts/common.sh@344 -- # : 1 00:08:17.081 05:10:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.081 05:10:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.081 05:10:33 -- scripts/common.sh@364 -- # decimal 1 00:08:17.081 05:10:33 -- scripts/common.sh@352 -- # local d=1 00:08:17.081 05:10:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.081 05:10:33 -- scripts/common.sh@354 -- # echo 1 00:08:17.081 05:10:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.081 05:10:33 -- scripts/common.sh@365 -- # decimal 2 00:08:17.081 05:10:33 -- scripts/common.sh@352 -- # local d=2 00:08:17.081 05:10:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.081 05:10:33 -- scripts/common.sh@354 -- # echo 2 00:08:17.081 05:10:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.081 05:10:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.081 05:10:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.081 05:10:33 -- scripts/common.sh@367 -- # return 0 00:08:17.081 05:10:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.081 --rc genhtml_function_coverage=1 00:08:17.081 --rc genhtml_legend=1 00:08:17.081 --rc geninfo_all_blocks=1 00:08:17.081 --rc geninfo_unexecuted_blocks=1 00:08:17.081 00:08:17.081 ' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.081 --rc genhtml_function_coverage=1 00:08:17.081 --rc genhtml_legend=1 00:08:17.081 --rc geninfo_all_blocks=1 00:08:17.081 --rc geninfo_unexecuted_blocks=1 00:08:17.081 00:08:17.081 ' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.081 --rc genhtml_function_coverage=1 00:08:17.081 --rc genhtml_legend=1 00:08:17.081 --rc geninfo_all_blocks=1 00:08:17.081 --rc geninfo_unexecuted_blocks=1 00:08:17.081 00:08:17.081 
' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.081 --rc genhtml_function_coverage=1 00:08:17.081 --rc genhtml_legend=1 00:08:17.081 --rc geninfo_all_blocks=1 00:08:17.081 --rc geninfo_unexecuted_blocks=1 00:08:17.081 00:08:17.081 ' 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.081 05:10:33 -- nvmf/common.sh@7 -- # uname -s 00:08:17.081 05:10:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.081 05:10:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.081 05:10:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.081 05:10:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.081 05:10:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.081 05:10:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.081 05:10:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.081 05:10:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.081 05:10:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.081 05:10:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.081 05:10:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:17.081 05:10:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:17.081 05:10:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.081 05:10:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.081 05:10:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.081 05:10:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:17.081 05:10:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.081 05:10:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.081 05:10:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.081 05:10:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.081 05:10:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.081 05:10:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.081 05:10:33 -- paths/export.sh@5 -- # export PATH 00:08:17.081 05:10:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.081 05:10:33 -- nvmf/common.sh@46 -- # : 0 00:08:17.081 05:10:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.081 05:10:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.081 05:10:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.081 05:10:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.081 05:10:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.081 05:10:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.081 05:10:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.081 05:10:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:17.081 05:10:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.081 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:17.081 05:10:33 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:17.081 05:10:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.081 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.081 ************************************ 00:08:17.081 START TEST nvmf_example 00:08:17.081 ************************************ 00:08:17.081 05:10:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:17.081 * Looking for test storage... 
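[editor's note] nvmf/common.sh, sourced above by nvmf.sh and again below by nvmf_example.sh, pins the fabric constants used throughout the RDMA tests. A trimmed sketch of that setup, with values taken from the log (the exact NVME_HOSTID derivation is an assumption; the log only shows that it equals the uuid suffix of the NQN):

NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100               # test subnet for RDMA interfaces
NVMF_IP_LEAST_ADDR=8                     # first host byte handed out
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}          # bare uuid (derivation assumed)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_CONNECT='nvme connect'              # widened to 'nvme connect -i 15' for mlx5 below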
00:08:17.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.081 05:10:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.081 05:10:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.081 05:10:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.081 05:10:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.081 05:10:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.081 05:10:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.081 05:10:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.081 05:10:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.081 05:10:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.081 05:10:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.081 05:10:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.081 05:10:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.081 05:10:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.081 05:10:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.081 05:10:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.081 05:10:33 -- scripts/common.sh@344 -- # : 1 00:08:17.081 05:10:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.081 05:10:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.081 05:10:33 -- scripts/common.sh@364 -- # decimal 1 00:08:17.081 05:10:33 -- scripts/common.sh@352 -- # local d=1 00:08:17.081 05:10:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.081 05:10:33 -- scripts/common.sh@354 -- # echo 1 00:08:17.081 05:10:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.081 05:10:33 -- scripts/common.sh@365 -- # decimal 2 00:08:17.081 05:10:33 -- scripts/common.sh@352 -- # local d=2 00:08:17.081 05:10:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.081 05:10:33 -- scripts/common.sh@354 -- # echo 2 00:08:17.081 05:10:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.081 05:10:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.081 05:10:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.081 05:10:33 -- scripts/common.sh@367 -- # return 0 00:08:17.081 05:10:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.081 --rc genhtml_function_coverage=1 00:08:17.081 --rc genhtml_legend=1 00:08:17.081 --rc geninfo_all_blocks=1 00:08:17.081 --rc geninfo_unexecuted_blocks=1 00:08:17.081 00:08:17.081 ' 00:08:17.081 05:10:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.081 --rc genhtml_branch_coverage=1 00:08:17.082 --rc genhtml_function_coverage=1 00:08:17.082 --rc genhtml_legend=1 00:08:17.082 --rc geninfo_all_blocks=1 00:08:17.082 --rc geninfo_unexecuted_blocks=1 00:08:17.082 00:08:17.082 ' 00:08:17.082 05:10:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.082 --rc genhtml_branch_coverage=1 00:08:17.082 --rc genhtml_function_coverage=1 00:08:17.082 --rc genhtml_legend=1 00:08:17.082 --rc geninfo_all_blocks=1 00:08:17.082 --rc geninfo_unexecuted_blocks=1 00:08:17.082 00:08:17.082 ' 
00:08:17.082 05:10:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.082 --rc genhtml_branch_coverage=1 00:08:17.082 --rc genhtml_function_coverage=1 00:08:17.082 --rc genhtml_legend=1 00:08:17.082 --rc geninfo_all_blocks=1 00:08:17.082 --rc geninfo_unexecuted_blocks=1 00:08:17.082 00:08:17.082 ' 00:08:17.082 05:10:33 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.082 05:10:33 -- nvmf/common.sh@7 -- # uname -s 00:08:17.082 05:10:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.082 05:10:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.082 05:10:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.082 05:10:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.082 05:10:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.082 05:10:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.082 05:10:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.082 05:10:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.082 05:10:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.082 05:10:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.082 05:10:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:17.082 05:10:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:17.082 05:10:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.082 05:10:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.082 05:10:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.082 05:10:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:17.341 05:10:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.341 05:10:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.341 05:10:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.341 05:10:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.341 05:10:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.342 05:10:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.342 05:10:33 -- paths/export.sh@5 -- # export PATH 00:08:17.342 05:10:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.342 05:10:33 -- nvmf/common.sh@46 -- # : 0 00:08:17.342 05:10:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.342 05:10:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.342 05:10:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.342 05:10:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.342 05:10:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.342 05:10:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.342 05:10:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.342 05:10:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:17.342 05:10:33 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:17.342 05:10:33 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:17.342 05:10:33 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:17.342 05:10:33 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:17.342 05:10:33 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:17.342 05:10:33 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:17.342 05:10:33 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:17.342 05:10:33 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:17.342 05:10:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.342 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.342 05:10:33 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:17.342 05:10:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:17.342 05:10:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.342 05:10:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:17.342 05:10:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:17.342 05:10:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:17.342 05:10:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.342 05:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.342 05:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.342 05:10:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:17.342 05:10:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:17.342 05:10:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:17.342 05:10:33 -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.914 05:10:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:23.914 05:10:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:23.914 05:10:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:23.914 05:10:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:23.914 05:10:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:23.914 05:10:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:23.914 05:10:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:23.914 05:10:40 -- nvmf/common.sh@294 -- # net_devs=() 00:08:23.914 05:10:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:23.914 05:10:40 -- nvmf/common.sh@295 -- # e810=() 00:08:23.914 05:10:40 -- nvmf/common.sh@295 -- # local -ga e810 00:08:23.914 05:10:40 -- nvmf/common.sh@296 -- # x722=() 00:08:23.914 05:10:40 -- nvmf/common.sh@296 -- # local -ga x722 00:08:23.914 05:10:40 -- nvmf/common.sh@297 -- # mlx=() 00:08:23.914 05:10:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:23.914 05:10:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.914 05:10:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.915 05:10:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.915 05:10:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.915 05:10:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.915 05:10:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.915 05:10:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:23.915 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:23.915 05:10:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.915 05:10:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:23.915 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:23.915 05:10:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.915 05:10:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.915 05:10:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.915 05:10:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:23.915 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.915 05:10:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.915 05:10:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:23.915 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.915 05:10:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:23.915 05:10:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:23.915 05:10:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:23.915 05:10:40 -- nvmf/common.sh@57 -- # uname 00:08:23.915 05:10:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:23.915 05:10:40 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:23.915 05:10:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:23.915 05:10:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:23.915 05:10:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:23.915 05:10:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:23.915 05:10:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:23.915 05:10:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:23.915 05:10:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:23.915 05:10:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:23.915 05:10:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:23.915 05:10:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.915 05:10:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:23.915 05:10:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:23.915 05:10:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.915 05:10:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@103 
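Each surviving PCI function is then mapped to its kernel netdev by globbing the device's net/ directory in sysfs, which is how mlx_0_0 and mlx_0_1 are reported above. Condensed from the trace (assumes the function is bound to a driver that exposes a netdev):

    # Sysfs lookup traced above: netdev name(s) behind a PCI function.
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"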
-- # echo mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@104 -- # continue 2 00:08:23.915 05:10:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@104 -- # continue 2 00:08:23.915 05:10:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:23.915 05:10:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:23.915 05:10:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:23.915 05:10:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:23.915 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.915 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:23.915 altname enp217s0f0np0 00:08:23.915 altname ens818f0np0 00:08:23.915 inet 192.168.100.8/24 scope global mlx_0_0 00:08:23.915 valid_lft forever preferred_lft forever 00:08:23.915 05:10:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:23.915 05:10:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:23.915 05:10:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:23.915 05:10:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:23.915 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.915 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:23.915 altname enp217s0f1np1 00:08:23.915 altname ens818f1np1 00:08:23.915 inet 192.168.100.9/24 scope global mlx_0_1 00:08:23.915 valid_lft forever preferred_lft forever 00:08:23.915 05:10:40 -- nvmf/common.sh@410 -- # return 0 00:08:23.915 05:10:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:23.915 05:10:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:23.915 05:10:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:23.915 05:10:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:23.915 05:10:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.915 05:10:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:23.915 05:10:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:23.915 05:10:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.915 05:10:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:23.915 05:10:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- 
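The 192.168.100.8 and 192.168.100.9 addresses printed above come from the get_ip_address helper, whose pipeline is visible in the trace and reconstructs as:

    # Reconstructed from the trace: first IPv4 address on an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8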
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@104 -- # continue 2 00:08:23.915 05:10:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.915 05:10:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.915 05:10:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:23.915 05:10:40 -- nvmf/common.sh@104 -- # continue 2 00:08:23.915 05:10:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:23.915 05:10:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:23.915 05:10:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:23.916 05:10:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:23.916 05:10:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:23.916 05:10:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:23.916 05:10:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:23.916 05:10:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:23.916 05:10:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:23.916 05:10:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:23.916 05:10:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:23.916 192.168.100.9' 00:08:23.916 05:10:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:23.916 192.168.100.9' 00:08:23.916 05:10:40 -- nvmf/common.sh@445 -- # head -n 1 00:08:23.916 05:10:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:23.916 05:10:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:23.916 192.168.100.9' 00:08:23.916 05:10:40 -- nvmf/common.sh@446 -- # tail -n +2 00:08:23.916 05:10:40 -- nvmf/common.sh@446 -- # head -n 1 00:08:23.916 05:10:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:23.916 05:10:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:23.916 05:10:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:23.916 05:10:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:23.916 05:10:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:23.916 05:10:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:23.916 05:10:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:23.916 05:10:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:23.916 05:10:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.916 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.916 05:10:40 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:23.916 05:10:40 -- target/nvmf_example.sh@34 -- # nvmfpid=1671731 00:08:23.916 05:10:40 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:23.916 05:10:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.916 05:10:40 -- target/nvmf_example.sh@36 -- # waitforlisten 1671731 00:08:23.916 05:10:40 -- common/autotest_common.sh@829 -- # '[' -z 1671731 ']' 00:08:23.916 05:10:40 -- 
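RDMA_IP_LIST is a newline-separated string, so the first and second target IPs are peeled off with head/tail exactly as traced:

    # First entry, then everything after line 1 narrowed back to one line.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9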
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.916 05:10:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.916 05:10:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.916 05:10:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.916 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.916 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.854 05:10:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.854 05:10:41 -- common/autotest_common.sh@862 -- # return 0 00:08:24.854 05:10:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:24.854 05:10:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.854 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.854 05:10:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:24.854 05:10:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.854 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 05:10:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.113 05:10:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:25.113 05:10:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.113 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 05:10:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.113 05:10:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:25.113 05:10:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.113 05:10:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.113 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 05:10:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.113 05:10:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:25.113 05:10:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.113 05:10:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.113 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 05:10:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.113 05:10:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:25.113 05:10:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.113 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.113 05:10:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.113 05:10:41 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:25.113 05:10:41 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:25.113 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.333 Initializing NVMe Controllers 00:08:37.333 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.333 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
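Unwrapped from the rpc_cmd helper, the provisioning sequence above is the stock SPDK RPC flow, followed by the initiator-side perf run. A sketch in scripts/rpc.py form (paths assume an SPDK checkout; the socket defaults to /var/tmp/spdk.sock):

    # Transport, backing bdev, subsystem, namespace, listener -- as traced above.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB / 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    # Queue depth 64, 4 KiB I/O, 30% reads, 10 s -- the workload measured below.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'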
00:08:37.333 Initialization complete. Launching workers.
00:08:37.333 ========================================================
00:08:37.333                                                                    Latency(us)
00:08:37.333 Device Information                                              :      IOPS     MiB/s   Average       min       max
00:08:37.333 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  26363.92    102.98   2427.17    593.40  15034.10
00:08:37.333 ========================================================
00:08:37.333 Total                                                           :  26363.92    102.98   2427.17    593.40  15034.10
00:08:37.333
00:08:37.333 05:10:52 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:08:37.333 05:10:52 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:08:37.333 05:10:52 -- nvmf/common.sh@476 -- # nvmfcleanup
00:08:37.333 05:10:52 -- nvmf/common.sh@116 -- # sync
00:08:37.333 05:10:52 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:08:37.333 05:10:52 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:08:37.333 05:10:52 -- nvmf/common.sh@119 -- # set +e
00:08:37.333 05:10:52 -- nvmf/common.sh@120 -- # for i in {1..20}
00:08:37.333 05:10:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:08:37.333 rmmod nvme_rdma
00:08:37.333 rmmod nvme_fabrics
00:08:37.333 05:10:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:08:37.333 05:10:52 -- nvmf/common.sh@123 -- # set -e
00:08:37.333 05:10:52 -- nvmf/common.sh@124 -- # return 0
00:08:37.333 05:10:52 -- nvmf/common.sh@477 -- # '[' -n 1671731 ']'
00:08:37.333 05:10:52 -- nvmf/common.sh@478 -- # killprocess 1671731
00:08:37.333 05:10:52 -- common/autotest_common.sh@936 -- # '[' -z 1671731 ']'
00:08:37.333 05:10:52 -- common/autotest_common.sh@940 -- # kill -0 1671731
00:08:37.333 05:10:52 -- common/autotest_common.sh@941 -- # uname
00:08:37.333 05:10:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:37.333 05:10:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1671731
00:08:37.333 05:10:52 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:08:37.333 05:10:52 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:08:37.333 05:10:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1671731'
00:08:37.333 killing process with pid 1671731
00:08:37.333 05:10:52 -- common/autotest_common.sh@955 -- # kill 1671731
00:08:37.333 05:10:52 -- common/autotest_common.sh@960 -- # wait 1671731
00:08:37.333 nvmf threads initialize successfully
00:08:37.333 bdev subsystem init successfully
00:08:37.333 created a nvmf target service
00:08:37.333 create targets's poll groups done
00:08:37.333 all subsystems of target started
00:08:37.333 nvmf target is running
00:08:37.333 all subsystems of target stopped
00:08:37.333 destroy targets's poll groups done
00:08:37.333 destroyed the nvmf target service
00:08:37.333 bdev subsystem finish successfully
00:08:37.333 nvmf threads destroy successfully
00:08:37.334 05:10:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:08:37.334 05:10:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:08:37.334 05:10:53 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:08:37.334 05:10:53 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:37.334 05:10:53 -- common/autotest_common.sh@10 -- # set +x
00:08:37.334
00:08:37.334 real 0m19.718s
00:08:37.334 user 0m52.366s
00:08:37.334 sys 0m5.612s
00:08:37.334 05:10:53 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:37.334 05:10:53 -- common/autotest_common.sh@10 -- # set +x
00:08:37.334 ************************************
00:08:37.334 END TEST nvmf_example
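As a consistency check on the table, the MiB/s column is just IOPS times the 4 KiB I/O size: 26363.92 * 4096 / 2^20 = 102.98 MiB/s, matching the reported throughput.

    awk 'BEGIN { printf "%.2f MiB/s\n", 26363.92 * 4096 / 1048576 }'   # -> 102.98 MiB/s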
************************************ 00:08:37.334 05:10:53 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:37.334 05:10:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.334 05:10:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.334 ************************************ 00:08:37.334 START TEST nvmf_filesystem 00:08:37.334 ************************************ 00:08:37.334 05:10:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:37.334 * Looking for test storage... 00:08:37.334 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.334 05:10:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.334 05:10:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.334 05:10:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.334 05:10:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.334 05:10:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.334 05:10:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.334 05:10:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.334 05:10:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.334 05:10:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.334 05:10:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.334 05:10:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.334 05:10:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.334 05:10:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.334 05:10:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.334 05:10:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.334 05:10:53 -- scripts/common.sh@344 -- # : 1 00:08:37.334 05:10:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.334 05:10:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.334 05:10:53 -- scripts/common.sh@364 -- # decimal 1 00:08:37.334 05:10:53 -- scripts/common.sh@352 -- # local d=1 00:08:37.334 05:10:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.334 05:10:53 -- scripts/common.sh@354 -- # echo 1 00:08:37.334 05:10:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.334 05:10:53 -- scripts/common.sh@365 -- # decimal 2 00:08:37.334 05:10:53 -- scripts/common.sh@352 -- # local d=2 00:08:37.334 05:10:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.334 05:10:53 -- scripts/common.sh@354 -- # echo 2 00:08:37.334 05:10:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.334 05:10:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.334 05:10:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.334 05:10:53 -- scripts/common.sh@367 -- # return 0 00:08:37.334 05:10:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.334 --rc genhtml_branch_coverage=1 00:08:37.334 --rc genhtml_function_coverage=1 00:08:37.334 --rc genhtml_legend=1 00:08:37.334 --rc geninfo_all_blocks=1 00:08:37.334 --rc geninfo_unexecuted_blocks=1 00:08:37.334 00:08:37.334 ' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.334 --rc genhtml_branch_coverage=1 00:08:37.334 --rc genhtml_function_coverage=1 00:08:37.334 --rc genhtml_legend=1 00:08:37.334 --rc geninfo_all_blocks=1 00:08:37.334 --rc geninfo_unexecuted_blocks=1 00:08:37.334 00:08:37.334 ' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.334 --rc genhtml_branch_coverage=1 00:08:37.334 --rc genhtml_function_coverage=1 00:08:37.334 --rc genhtml_legend=1 00:08:37.334 --rc geninfo_all_blocks=1 00:08:37.334 --rc geninfo_unexecuted_blocks=1 00:08:37.334 00:08:37.334 ' 00:08:37.334 05:10:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.334 --rc genhtml_branch_coverage=1 00:08:37.334 --rc genhtml_function_coverage=1 00:08:37.334 --rc genhtml_legend=1 00:08:37.334 --rc geninfo_all_blocks=1 00:08:37.334 --rc geninfo_unexecuted_blocks=1 00:08:37.334 00:08:37.334 ' 00:08:37.334 05:10:53 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:37.334 05:10:53 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:37.334 05:10:53 -- common/autotest_common.sh@34 -- # set -e 00:08:37.334 05:10:53 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:37.334 05:10:53 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:37.334 05:10:53 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:37.334 05:10:53 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:37.334 05:10:53 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:37.334 05:10:53 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:37.334 05:10:53 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:37.334 05:10:53 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
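The lt/cmp_versions probe a few entries back decides whether the installed lcov (here 1.15) predates 2.0 and therefore needs the legacy --rc option spellings exported above. A simplified reconstruction of the comparison, assuming digits-only version fields:

    # Split on ".-:" and compare field by field; missing fields count as 0.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "legacy lcov options needed"   # prints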
00:08:37.334 05:10:53 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:37.334 05:10:53 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:37.334 05:10:53 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:37.334 05:10:53 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:37.334 05:10:53 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:37.334 05:10:53 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:37.334 05:10:53 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:37.334 05:10:53 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:37.334 05:10:53 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:37.334 05:10:53 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:37.334 05:10:53 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:37.334 05:10:53 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:37.334 05:10:53 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:37.334 05:10:53 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:37.334 05:10:53 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:37.334 05:10:53 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:37.334 05:10:53 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:37.334 05:10:53 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:37.334 05:10:53 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:37.334 05:10:53 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:37.334 05:10:53 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:37.334 05:10:53 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:37.334 05:10:53 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:37.334 05:10:53 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:37.334 05:10:53 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:37.334 05:10:53 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:37.334 05:10:53 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:37.334 05:10:53 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:37.334 05:10:53 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:37.334 05:10:53 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:37.334 05:10:53 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:37.334 05:10:53 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:37.334 05:10:53 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:37.334 05:10:53 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:37.334 05:10:53 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:37.334 05:10:53 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:37.334 05:10:53 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:37.334 05:10:53 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:37.334 05:10:53 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:37.334 05:10:53 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:37.334 05:10:53 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:37.334 05:10:53 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:37.334 05:10:53 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:37.334 05:10:53 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:37.334 05:10:53 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:37.334 
05:10:53 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:37.334 05:10:53 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:37.334 05:10:53 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:37.334 05:10:53 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:37.334 05:10:53 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:37.335 05:10:53 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:37.335 05:10:53 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:37.335 05:10:53 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:37.335 05:10:53 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:37.335 05:10:53 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:37.335 05:10:53 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:37.335 05:10:53 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:37.335 05:10:53 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:37.335 05:10:53 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:37.335 05:10:53 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:37.335 05:10:53 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:37.335 05:10:53 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:37.335 05:10:53 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:37.335 05:10:53 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:37.335 05:10:53 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:37.335 05:10:53 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:37.335 05:10:53 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:37.335 05:10:53 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:37.335 05:10:53 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:37.335 05:10:53 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:37.335 05:10:53 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:37.335 05:10:53 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:37.335 05:10:53 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:37.335 05:10:53 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:37.335 05:10:53 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:37.335 05:10:53 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:37.335 05:10:53 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:37.335 05:10:53 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:37.335 05:10:53 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:37.335 05:10:53 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:37.335 05:10:53 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:37.335 05:10:53 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:37.335 05:10:53 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:37.335 05:10:53 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:37.335 05:10:53 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:37.335 05:10:53 -- common/applications.sh@16 -- # 
NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:37.335 05:10:53 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:37.335 05:10:53 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:37.335 05:10:53 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:37.335 05:10:53 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:37.335 05:10:53 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:37.335 #define SPDK_CONFIG_H 00:08:37.335 #define SPDK_CONFIG_APPS 1 00:08:37.335 #define SPDK_CONFIG_ARCH native 00:08:37.335 #undef SPDK_CONFIG_ASAN 00:08:37.335 #undef SPDK_CONFIG_AVAHI 00:08:37.335 #undef SPDK_CONFIG_CET 00:08:37.335 #define SPDK_CONFIG_COVERAGE 1 00:08:37.335 #define SPDK_CONFIG_CROSS_PREFIX 00:08:37.335 #undef SPDK_CONFIG_CRYPTO 00:08:37.335 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:37.335 #undef SPDK_CONFIG_CUSTOMOCF 00:08:37.335 #undef SPDK_CONFIG_DAOS 00:08:37.335 #define SPDK_CONFIG_DAOS_DIR 00:08:37.335 #define SPDK_CONFIG_DEBUG 1 00:08:37.335 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:37.335 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:37.335 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:37.335 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:37.335 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:37.335 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:37.335 #define SPDK_CONFIG_EXAMPLES 1 00:08:37.335 #undef SPDK_CONFIG_FC 00:08:37.335 #define SPDK_CONFIG_FC_PATH 00:08:37.335 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:37.335 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:37.335 #undef SPDK_CONFIG_FUSE 00:08:37.335 #undef SPDK_CONFIG_FUZZER 00:08:37.335 #define SPDK_CONFIG_FUZZER_LIB 00:08:37.335 #undef SPDK_CONFIG_GOLANG 00:08:37.335 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:37.335 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:37.335 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:37.335 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:37.335 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:37.335 #define SPDK_CONFIG_IDXD 1 00:08:37.335 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:37.335 #undef SPDK_CONFIG_IPSEC_MB 00:08:37.335 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:37.335 #define SPDK_CONFIG_ISAL 1 00:08:37.335 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:37.335 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:37.335 #define SPDK_CONFIG_LIBDIR 00:08:37.335 #undef SPDK_CONFIG_LTO 00:08:37.335 #define SPDK_CONFIG_MAX_LCORES 00:08:37.335 #define SPDK_CONFIG_NVME_CUSE 1 00:08:37.335 #undef SPDK_CONFIG_OCF 00:08:37.335 #define SPDK_CONFIG_OCF_PATH 00:08:37.335 #define SPDK_CONFIG_OPENSSL_PATH 00:08:37.335 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:37.335 #undef SPDK_CONFIG_PGO_USE 00:08:37.335 #define SPDK_CONFIG_PREFIX /usr/local 00:08:37.335 #undef SPDK_CONFIG_RAID5F 00:08:37.335 #undef SPDK_CONFIG_RBD 00:08:37.335 #define SPDK_CONFIG_RDMA 1 00:08:37.335 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:37.335 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:37.335 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:37.335 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:37.335 #define SPDK_CONFIG_SHARED 1 00:08:37.335 #undef SPDK_CONFIG_SMA 00:08:37.335 #define SPDK_CONFIG_TESTS 1 00:08:37.335 #undef SPDK_CONFIG_TSAN 00:08:37.335 #define SPDK_CONFIG_UBLK 1 00:08:37.335 #define SPDK_CONFIG_UBSAN 1 00:08:37.335 #undef SPDK_CONFIG_UNIT_TESTS 
00:08:37.335 #undef SPDK_CONFIG_URING 00:08:37.335 #define SPDK_CONFIG_URING_PATH 00:08:37.335 #undef SPDK_CONFIG_URING_ZNS 00:08:37.335 #undef SPDK_CONFIG_USDT 00:08:37.335 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:37.335 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:37.335 #undef SPDK_CONFIG_VFIO_USER 00:08:37.335 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:37.335 #define SPDK_CONFIG_VHOST 1 00:08:37.335 #define SPDK_CONFIG_VIRTIO 1 00:08:37.335 #undef SPDK_CONFIG_VTUNE 00:08:37.335 #define SPDK_CONFIG_VTUNE_DIR 00:08:37.335 #define SPDK_CONFIG_WERROR 1 00:08:37.335 #define SPDK_CONFIG_WPDK_DIR 00:08:37.335 #undef SPDK_CONFIG_XNVME 00:08:37.335 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:37.335 05:10:53 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:37.335 05:10:53 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:37.335 05:10:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.335 05:10:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.335 05:10:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.335 05:10:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.335 05:10:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.335 05:10:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.335 05:10:53 -- paths/export.sh@5 -- # export PATH 00:08:37.336 05:10:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.336 05:10:53 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:37.336 05:10:53 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:37.336 05:10:53 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:37.336 05:10:53 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:37.336 05:10:53 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:37.336 05:10:53 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:37.336 05:10:53 -- pm/common@16 -- # TEST_TAG=N/A 00:08:37.336 05:10:53 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:37.336 05:10:53 -- common/autotest_common.sh@52 -- # : 1 00:08:37.336 05:10:53 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:37.336 05:10:53 -- common/autotest_common.sh@56 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:37.336 05:10:53 -- common/autotest_common.sh@58 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:37.336 05:10:53 -- common/autotest_common.sh@60 -- # : 1 00:08:37.336 05:10:53 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:37.336 05:10:53 -- common/autotest_common.sh@62 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:37.336 05:10:53 -- common/autotest_common.sh@64 -- # : 00:08:37.336 05:10:53 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:37.336 05:10:53 -- common/autotest_common.sh@66 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:37.336 05:10:53 -- common/autotest_common.sh@68 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:37.336 05:10:53 -- common/autotest_common.sh@70 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:37.336 05:10:53 -- common/autotest_common.sh@72 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:37.336 05:10:53 -- common/autotest_common.sh@74 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:37.336 05:10:53 -- common/autotest_common.sh@76 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:37.336 05:10:53 -- common/autotest_common.sh@78 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:37.336 05:10:53 -- common/autotest_common.sh@80 -- # : 1 00:08:37.336 05:10:53 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:37.336 05:10:53 -- common/autotest_common.sh@82 -- # : 0 
00:08:37.336 05:10:53 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:37.336 05:10:53 -- common/autotest_common.sh@84 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:37.336 05:10:53 -- common/autotest_common.sh@86 -- # : 1 00:08:37.336 05:10:53 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:37.336 05:10:53 -- common/autotest_common.sh@88 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:37.336 05:10:53 -- common/autotest_common.sh@90 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:37.336 05:10:53 -- common/autotest_common.sh@92 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:37.336 05:10:53 -- common/autotest_common.sh@94 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:37.336 05:10:53 -- common/autotest_common.sh@96 -- # : rdma 00:08:37.336 05:10:53 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:37.336 05:10:53 -- common/autotest_common.sh@98 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:37.336 05:10:53 -- common/autotest_common.sh@100 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:37.336 05:10:53 -- common/autotest_common.sh@102 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:37.336 05:10:53 -- common/autotest_common.sh@104 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:37.336 05:10:53 -- common/autotest_common.sh@106 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:37.336 05:10:53 -- common/autotest_common.sh@108 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:37.336 05:10:53 -- common/autotest_common.sh@110 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:37.336 05:10:53 -- common/autotest_common.sh@112 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:37.336 05:10:53 -- common/autotest_common.sh@114 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:37.336 05:10:53 -- common/autotest_common.sh@116 -- # : 1 00:08:37.336 05:10:53 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:37.336 05:10:53 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:37.336 05:10:53 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:37.336 05:10:53 -- common/autotest_common.sh@120 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:37.336 05:10:53 -- common/autotest_common.sh@122 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:37.336 05:10:53 -- common/autotest_common.sh@124 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:37.336 05:10:53 -- common/autotest_common.sh@126 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:37.336 05:10:53 -- common/autotest_common.sh@128 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 
00:08:37.336 05:10:53 -- common/autotest_common.sh@130 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:37.336 05:10:53 -- common/autotest_common.sh@132 -- # : v23.11 00:08:37.336 05:10:53 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:37.336 05:10:53 -- common/autotest_common.sh@134 -- # : true 00:08:37.336 05:10:53 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:37.336 05:10:53 -- common/autotest_common.sh@136 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:37.336 05:10:53 -- common/autotest_common.sh@138 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:37.336 05:10:53 -- common/autotest_common.sh@140 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:37.336 05:10:53 -- common/autotest_common.sh@142 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:37.336 05:10:53 -- common/autotest_common.sh@144 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:37.336 05:10:53 -- common/autotest_common.sh@146 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:37.336 05:10:53 -- common/autotest_common.sh@148 -- # : mlx5 00:08:37.336 05:10:53 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:37.336 05:10:53 -- common/autotest_common.sh@150 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:37.336 05:10:53 -- common/autotest_common.sh@152 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:37.336 05:10:53 -- common/autotest_common.sh@154 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:37.336 05:10:53 -- common/autotest_common.sh@156 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:37.336 05:10:53 -- common/autotest_common.sh@158 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:37.336 05:10:53 -- common/autotest_common.sh@160 -- # : 0 00:08:37.336 05:10:53 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:37.336 05:10:53 -- common/autotest_common.sh@163 -- # : 00:08:37.337 05:10:53 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:37.337 05:10:53 -- common/autotest_common.sh@165 -- # : 0 00:08:37.337 05:10:53 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:37.337 05:10:53 -- common/autotest_common.sh@167 -- # : 0 00:08:37.337 05:10:53 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:37.337 05:10:53 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:37.337 
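The long run of ': 0' / 'export SPDK_TEST_*' pairs above is the default-assignment idiom from autotest_common.sh: the ':' builtin evaluates its argument, and the ${VAR:=default} expansion assigns only when the flag is unset, so job-level presets (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=rdma, SPDK_TEST_NVMF_NICS=mlx5) survive. A sketch of the idiom:

    # ':' is a no-op; the ${VAR:=default} expansion does the assignment.
    : "${SPDK_TEST_NVMF:=0}"   # traced above as ': 1' because the job preset it
    export SPDK_TEST_NVMF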
05:10:53 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:37.337 05:10:53 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:37.337 05:10:53 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:37.337 05:10:53 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:37.337 05:10:53 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:37.337 05:10:53 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:37.337 05:10:53 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:37.337 05:10:53 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:37.337 05:10:53 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:37.337 05:10:53 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:37.337 05:10:53 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:37.337 05:10:53 -- common/autotest_common.sh@196 -- # cat 00:08:37.337 05:10:53 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:37.337 05:10:53 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:37.337 05:10:53 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:37.337 05:10:53 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:37.337 05:10:53 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:37.337 05:10:53 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:37.337 05:10:53 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:37.337 05:10:53 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:37.337 05:10:53 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:37.337 05:10:53 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:37.337 05:10:53 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:37.337 05:10:53 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:37.337 05:10:53 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:37.337 05:10:53 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:37.337 05:10:53 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:37.337 05:10:53 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:37.337 05:10:53 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:37.337 05:10:53 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:37.337 05:10:53 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:37.337 05:10:53 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:37.337 05:10:53 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@252 -- # 
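The suppression-file steps above (rm, echo leak:libfuse3.so, export LSAN_OPTIONS) rebuild the LeakSanitizer suppression list on every run; in straight-line form:

    # Regenerate the leak-suppression file and point LeakSanitizer at it.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file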
_lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:37.337 05:10:53 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:37.337 05:10:53 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:37.337 05:10:53 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:37.337 05:10:53 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:37.337 05:10:53 -- common/autotest_common.sh@259 -- # valgrind= 00:08:37.337 05:10:53 -- common/autotest_common.sh@265 -- # uname -s 00:08:37.337 05:10:53 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:37.337 05:10:53 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:37.337 05:10:53 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:37.337 05:10:53 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:37.337 05:10:53 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:37.337 05:10:53 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:37.337 05:10:53 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:37.337 05:10:53 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:37.337 05:10:53 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:37.337 05:10:53 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:37.337 05:10:53 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:37.337 05:10:53 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:37.337 05:10:53 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:37.337 05:10:53 -- common/autotest_common.sh@319 -- # [[ -z 1673979 ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@319 -- # kill -0 1673979 00:08:37.337 05:10:53 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:37.337 05:10:53 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:37.337 05:10:53 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:37.337 05:10:53 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:37.337 05:10:53 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:37.337 05:10:53 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:37.337 05:10:53 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:37.337 05:10:53 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.jEP7Kx 00:08:37.337 05:10:53 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:37.337 05:10:53 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:37.337 05:10:53 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jEP7Kx/tests/target /tmp/spdk.jEP7Kx 00:08:37.338 05:10:53 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@328 -- # df -T 00:08:37.338 05:10:53 -- 
common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=54411354112 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730570240 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=7319216128 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864027648 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865285120 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336672768 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346114048 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=9441280 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865063936 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865285120 00:08:37.338 05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=221184 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:37.338 05:10:53 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:08:37.338 
05:10:53 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:37.338 05:10:53 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:37.338 05:10:53 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:37.338 * Looking for test storage... 00:08:37.338 05:10:53 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:37.338 05:10:53 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:37.338 05:10:53 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.338 05:10:53 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:37.338 05:10:53 -- common/autotest_common.sh@373 -- # mount=/ 00:08:37.338 05:10:53 -- common/autotest_common.sh@375 -- # target_space=54411354112 00:08:37.338 05:10:53 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:37.338 05:10:53 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:37.338 05:10:53 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@382 -- # new_size=9533808640 00:08:37.338 05:10:53 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:37.338 05:10:53 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.338 05:10:53 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.338 05:10:53 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.338 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.338 05:10:53 -- common/autotest_common.sh@390 -- # return 0 00:08:37.338 05:10:53 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:37.338 05:10:53 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:37.338 05:10:53 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:37.338 05:10:53 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:37.338 05:10:53 -- common/autotest_common.sh@1682 -- # true 00:08:37.338 05:10:53 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:37.338 05:10:53 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@27 -- # exec 00:08:37.338 05:10:53 -- common/autotest_common.sh@29 -- # exec 00:08:37.338 05:10:53 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:37.338 05:10:53 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:37.338 05:10:53 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:37.338 05:10:53 -- common/autotest_common.sh@18 -- # set -x 00:08:37.338 05:10:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.338 05:10:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.338 05:10:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.338 05:10:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.338 05:10:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.338 05:10:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.338 05:10:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.338 05:10:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.338 05:10:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.338 05:10:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.338 05:10:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.338 05:10:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.338 05:10:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.338 05:10:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.338 05:10:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.338 05:10:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.338 05:10:53 -- scripts/common.sh@344 -- # : 1 00:08:37.338 05:10:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.338 05:10:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.338 05:10:53 -- scripts/common.sh@364 -- # decimal 1 00:08:37.338 05:10:53 -- scripts/common.sh@352 -- # local d=1 00:08:37.338 05:10:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.338 05:10:53 -- scripts/common.sh@354 -- # echo 1 00:08:37.338 05:10:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.338 05:10:53 -- scripts/common.sh@365 -- # decimal 2 00:08:37.338 05:10:53 -- scripts/common.sh@352 -- # local d=2 00:08:37.338 05:10:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.338 05:10:53 -- scripts/common.sh@354 -- # echo 2 00:08:37.338 05:10:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.338 05:10:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.338 05:10:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.338 05:10:53 -- scripts/common.sh@367 -- # return 0 00:08:37.338 05:10:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.338 05:10:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.338 --rc genhtml_branch_coverage=1 00:08:37.338 --rc genhtml_function_coverage=1 00:08:37.339 --rc genhtml_legend=1 00:08:37.339 --rc geninfo_all_blocks=1 00:08:37.339 --rc geninfo_unexecuted_blocks=1 00:08:37.339 00:08:37.339 ' 00:08:37.339 05:10:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.339 --rc genhtml_branch_coverage=1 00:08:37.339 --rc genhtml_function_coverage=1 00:08:37.339 --rc genhtml_legend=1 00:08:37.339 --rc geninfo_all_blocks=1 00:08:37.339 --rc geninfo_unexecuted_blocks=1 00:08:37.339 00:08:37.339 ' 00:08:37.339 05:10:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.339 --rc genhtml_branch_coverage=1 00:08:37.339 --rc genhtml_function_coverage=1 00:08:37.339 --rc genhtml_legend=1 00:08:37.339 --rc geninfo_all_blocks=1 00:08:37.339 --rc 
geninfo_unexecuted_blocks=1 00:08:37.339 00:08:37.339 ' 00:08:37.339 05:10:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.339 --rc genhtml_branch_coverage=1 00:08:37.339 --rc genhtml_function_coverage=1 00:08:37.339 --rc genhtml_legend=1 00:08:37.339 --rc geninfo_all_blocks=1 00:08:37.339 --rc geninfo_unexecuted_blocks=1 00:08:37.339 00:08:37.339 ' 00:08:37.339 05:10:53 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.339 05:10:53 -- nvmf/common.sh@7 -- # uname -s 00:08:37.339 05:10:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.339 05:10:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.339 05:10:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.339 05:10:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.339 05:10:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.339 05:10:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.339 05:10:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.339 05:10:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.339 05:10:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.339 05:10:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.339 05:10:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:37.339 05:10:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:37.339 05:10:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.339 05:10:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.339 05:10:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.339 05:10:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:37.339 05:10:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.339 05:10:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.339 05:10:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.339 05:10:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.339 05:10:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.339 05:10:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.339 05:10:53 -- paths/export.sh@5 -- # export PATH 00:08:37.339 05:10:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.339 05:10:53 -- nvmf/common.sh@46 -- # : 0 00:08:37.339 05:10:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.339 05:10:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.339 05:10:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.339 05:10:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.339 05:10:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.339 05:10:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.339 05:10:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.339 05:10:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.339 05:10:53 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:37.339 05:10:53 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:37.339 05:10:53 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:37.339 05:10:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:37.339 05:10:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.339 05:10:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.339 05:10:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.339 05:10:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.339 05:10:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.339 05:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.339 05:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.339 05:10:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:37.340 05:10:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:37.340 05:10:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:37.340 05:10:53 -- common/autotest_common.sh@10 -- # set +x 00:08:43.916 05:11:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:43.916 05:11:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:43.916 05:11:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:43.916 05:11:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:43.916 05:11:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:43.916 05:11:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:43.916 05:11:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:43.916 05:11:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:43.916 05:11:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:43.916 05:11:00 -- nvmf/common.sh@295 -- # e810=() 00:08:43.916 05:11:00 -- nvmf/common.sh@295 -- # local -ga e810 00:08:43.916 05:11:00 -- nvmf/common.sh@296 -- # x722=() 00:08:43.916 05:11:00 -- nvmf/common.sh@296 -- # local -ga x722 00:08:43.916 05:11:00 -- nvmf/common.sh@297 -- # mlx=() 00:08:43.916 05:11:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:43.916 05:11:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.916 05:11:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:43.916 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:43.916 05:11:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.916 05:11:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:43.916 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:43.916 05:11:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.916 05:11:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:43.916 
05:11:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.916 05:11:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.916 05:11:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:43.916 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:43.916 05:11:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.916 05:11:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.916 05:11:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:43.916 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:43.916 05:11:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.916 05:11:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:43.916 05:11:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:43.916 05:11:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:43.916 05:11:00 -- nvmf/common.sh@57 -- # uname 00:08:43.916 05:11:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:43.916 05:11:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:43.916 05:11:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:43.916 05:11:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:43.916 05:11:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:43.916 05:11:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:43.916 05:11:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:43.916 05:11:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:43.916 05:11:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:43.916 05:11:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:43.916 05:11:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:43.916 05:11:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.916 05:11:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:43.916 05:11:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:43.916 05:11:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.916 05:11:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:43.916 05:11:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:43.916 05:11:00 -- nvmf/common.sh@104 -- # continue 2 00:08:43.916 05:11:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.916 05:11:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.916 05:11:00 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:43.916 05:11:00 -- nvmf/common.sh@104 -- # continue 2 00:08:43.916 05:11:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:43.916 05:11:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:43.916 05:11:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:43.916 05:11:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:43.916 05:11:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:43.916 05:11:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:43.916 05:11:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:43.917 05:11:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:43.917 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.917 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:43.917 altname enp217s0f0np0 00:08:43.917 altname ens818f0np0 00:08:43.917 inet 192.168.100.8/24 scope global mlx_0_0 00:08:43.917 valid_lft forever preferred_lft forever 00:08:43.917 05:11:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:43.917 05:11:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:43.917 05:11:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:43.917 05:11:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:43.917 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.917 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:43.917 altname enp217s0f1np1 00:08:43.917 altname ens818f1np1 00:08:43.917 inet 192.168.100.9/24 scope global mlx_0_1 00:08:43.917 valid_lft forever preferred_lft forever 00:08:43.917 05:11:00 -- nvmf/common.sh@410 -- # return 0 00:08:43.917 05:11:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.917 05:11:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:43.917 05:11:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:43.917 05:11:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:43.917 05:11:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.917 05:11:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:43.917 05:11:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:43.917 05:11:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.917 05:11:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:43.917 05:11:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:43.917 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.917 05:11:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:43.917 05:11:00 -- nvmf/common.sh@104 -- # continue 2 00:08:43.917 05:11:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:43.917 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.917 05:11:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.917 05:11:00 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.917 05:11:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@104 -- # continue 2 00:08:43.917 05:11:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:43.917 05:11:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:43.917 05:11:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:43.917 05:11:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:43.917 05:11:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:43.917 05:11:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:43.917 05:11:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:43.917 192.168.100.9' 00:08:43.917 05:11:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:43.917 192.168.100.9' 00:08:43.917 05:11:00 -- nvmf/common.sh@445 -- # head -n 1 00:08:43.917 05:11:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:43.917 05:11:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:43.917 192.168.100.9' 00:08:43.917 05:11:00 -- nvmf/common.sh@446 -- # tail -n +2 00:08:43.917 05:11:00 -- nvmf/common.sh@446 -- # head -n 1 00:08:43.917 05:11:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:43.917 05:11:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:43.917 05:11:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:43.917 05:11:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:43.917 05:11:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:43.917 05:11:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:43.917 05:11:00 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:43.917 05:11:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.917 05:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.917 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:43.917 ************************************ 00:08:43.917 START TEST nvmf_filesystem_no_in_capsule 00:08:43.917 ************************************ 00:08:43.917 05:11:00 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:43.917 05:11:00 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:43.917 05:11:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:43.917 05:11:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.917 05:11:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.917 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:43.917 05:11:00 -- nvmf/common.sh@469 -- # nvmfpid=1677365 00:08:43.917 05:11:00 -- nvmf/common.sh@470 -- # waitforlisten 1677365 00:08:43.917 05:11:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.917 05:11:00 -- common/autotest_common.sh@829 -- # '[' -z 1677365 ']' 00:08:43.917 05:11:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.917 05:11:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.917 05:11:00 -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.917 05:11:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.917 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:43.917 [2024-11-19 05:11:00.385774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.917 [2024-11-19 05:11:00.385822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.917 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.917 [2024-11-19 05:11:00.457019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.177 [2024-11-19 05:11:00.496353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.177 [2024-11-19 05:11:00.496469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.177 [2024-11-19 05:11:00.496479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.177 [2024-11-19 05:11:00.496489] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.177 [2024-11-19 05:11:00.496582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.177 [2024-11-19 05:11:00.496631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.177 [2024-11-19 05:11:00.496715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.177 [2024-11-19 05:11:00.496716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.745 05:11:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.745 05:11:01 -- common/autotest_common.sh@862 -- # return 0 00:08:44.745 05:11:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.745 05:11:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.745 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.745 05:11:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.745 05:11:01 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:44.745 05:11:01 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:44.745 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.745 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.745 [2024-11-19 05:11:01.263023] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:44.745 [2024-11-19 05:11:01.284152] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2370200/0x23746f0) succeed. 00:08:44.745 [2024-11-19 05:11:01.293236] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23717f0/0x23b5d90) succeed. 
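
The interface and address discovery traced above (get_rdma_if_list / get_ip_address) reduces to a short shell idiom. A condensed sketch, using the interface names and 192.168.100.0/24 addresses that appear in this log; other rigs will report different names:

  # Strip the CIDR suffix off the first IPv4 address of each RDMA-capable netdev.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # Prints 192.168.100.8 and 192.168.100.9; the harness keeps the first as
  # NVMF_FIRST_TARGET_IP (head -n 1) and the second as NVMF_SECOND_TARGET_IP
  # (tail -n +2 | head -n 1), exactly as traced above.
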
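
The target bring-up around this point is issued one rpc_cmd at a time. Gathered in one place, it amounts to the following sketch, with SPDK's stock scripts/rpc.py client standing in for the test harness's rpc_cmd wrapper (an assumption; all arguments are copied from the trace):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MB ramdisk bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
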
00:08:45.004 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.004 05:11:01 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.004 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.004 Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.004 05:11:01 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:45.004 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.004 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.004 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.004 05:11:01 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.004 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.004 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.004 05:11:01 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:45.004 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.004 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.004 [2024-11-19 05:11:01.548392] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.004 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.004 05:11:01 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:45.004 05:11:01 -- common/autotest_common.sh@1369 -- # local bs 00:08:45.004 05:11:01 -- common/autotest_common.sh@1370 -- # local nb 00:08:45.004 05:11:01 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:45.004 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.004 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.264 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.264 05:11:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:45.264 { 00:08:45.264 "name": "Malloc1", 00:08:45.264 "aliases": [ 00:08:45.264 "768f496a-212b-4cbf-9505-d995b5ff5ddb" 00:08:45.264 ], 00:08:45.264 "product_name": "Malloc disk", 00:08:45.264 "block_size": 512, 00:08:45.264 "num_blocks": 1048576, 00:08:45.264 "uuid": "768f496a-212b-4cbf-9505-d995b5ff5ddb", 00:08:45.264 "assigned_rate_limits": { 00:08:45.264 "rw_ios_per_sec": 0, 00:08:45.264 "rw_mbytes_per_sec": 0, 00:08:45.264 "r_mbytes_per_sec": 0, 00:08:45.264 "w_mbytes_per_sec": 0 00:08:45.264 }, 00:08:45.264 "claimed": true, 00:08:45.264 "claim_type": "exclusive_write", 00:08:45.264 "zoned": false, 00:08:45.264 "supported_io_types": { 00:08:45.264 "read": true, 00:08:45.264 "write": true, 00:08:45.264 "unmap": true, 00:08:45.264 "write_zeroes": true, 00:08:45.264 "flush": true, 00:08:45.264 "reset": true, 00:08:45.264 "compare": false, 00:08:45.264 "compare_and_write": false, 00:08:45.264 "abort": true, 00:08:45.264 "nvme_admin": false, 00:08:45.264 "nvme_io": false 00:08:45.264 }, 00:08:45.264 "memory_domains": [ 00:08:45.264 { 00:08:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.264 "dma_device_type": 2 00:08:45.264 } 00:08:45.264 ], 00:08:45.264 
"driver_specific": {} 00:08:45.264 } 00:08:45.264 ]' 00:08:45.264 05:11:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:45.264 05:11:01 -- common/autotest_common.sh@1372 -- # bs=512 00:08:45.264 05:11:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:45.264 05:11:01 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:45.264 05:11:01 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:45.264 05:11:01 -- common/autotest_common.sh@1377 -- # echo 512 00:08:45.264 05:11:01 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:45.264 05:11:01 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:46.202 05:11:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.202 05:11:02 -- common/autotest_common.sh@1187 -- # local i=0 00:08:46.202 05:11:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.202 05:11:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:46.202 05:11:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:48.738 05:11:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:48.738 05:11:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:48.738 05:11:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.738 05:11:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:48.738 05:11:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.738 05:11:04 -- common/autotest_common.sh@1197 -- # return 0 00:08:48.738 05:11:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:48.738 05:11:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:48.738 05:11:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:48.738 05:11:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:48.738 05:11:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:48.738 05:11:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:48.738 05:11:04 -- setup/common.sh@80 -- # echo 536870912 00:08:48.738 05:11:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:48.738 05:11:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:48.738 05:11:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:48.738 05:11:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:48.738 05:11:04 -- target/filesystem.sh@69 -- # partprobe 00:08:48.738 05:11:04 -- target/filesystem.sh@70 -- # sleep 1 00:08:49.675 05:11:05 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:49.675 05:11:05 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:49.675 05:11:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:49.675 05:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.675 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:08:49.675 ************************************ 00:08:49.675 START TEST filesystem_ext4 00:08:49.675 ************************************ 00:08:49.675 05:11:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:49.675 05:11:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:49.675 05:11:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.675 
05:11:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:49.675 05:11:05 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:49.675 05:11:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:49.675 05:11:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:49.675 05:11:05 -- common/autotest_common.sh@915 -- # local force 00:08:49.675 05:11:05 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:49.675 05:11:05 -- common/autotest_common.sh@918 -- # force=-F 00:08:49.675 05:11:05 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:49.675 mke2fs 1.47.0 (5-Feb-2023) 00:08:49.675 Discarding device blocks: 0/522240 done 00:08:49.675 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:49.675 Filesystem UUID: 20006fba-57c8-4aa3-adcf-846a540e6cb4 00:08:49.675 Superblock backups stored on blocks: 00:08:49.675 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:49.675 00:08:49.675 Allocating group tables: 0/64 done 00:08:49.675 Writing inode tables: 0/64 done 00:08:49.675 Creating journal (8192 blocks): done 00:08:49.675 Writing superblocks and filesystem accounting information: 0/64 done 00:08:49.675 00:08:49.675 05:11:06 -- common/autotest_common.sh@931 -- # return 0 00:08:49.676 05:11:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.676 05:11:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.676 05:11:06 -- target/filesystem.sh@25 -- # sync 00:08:49.676 05:11:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.676 05:11:06 -- target/filesystem.sh@27 -- # sync 00:08:49.676 05:11:06 -- target/filesystem.sh@29 -- # i=0 00:08:49.676 05:11:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.676 05:11:06 -- target/filesystem.sh@37 -- # kill -0 1677365 00:08:49.676 05:11:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.676 05:11:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.676 05:11:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.676 05:11:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.676 00:08:49.676 real 0m0.192s 00:08:49.676 user 0m0.029s 00:08:49.676 sys 0m0.071s 00:08:49.676 05:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.676 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:49.676 ************************************ 00:08:49.676 END TEST filesystem_ext4 00:08:49.676 ************************************ 00:08:49.676 05:11:06 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:49.676 05:11:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:49.676 05:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.676 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:49.676 ************************************ 00:08:49.676 START TEST filesystem_btrfs 00:08:49.676 ************************************ 00:08:49.676 05:11:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:49.676 05:11:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:49.676 05:11:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.676 05:11:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:49.676 05:11:06 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:49.676 05:11:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:49.676 05:11:06 -- common/autotest_common.sh@914 -- # local 
i=0 00:08:49.676 05:11:06 -- common/autotest_common.sh@915 -- # local force 00:08:49.676 05:11:06 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:49.676 05:11:06 -- common/autotest_common.sh@920 -- # force=-f 00:08:49.676 05:11:06 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:49.935 btrfs-progs v6.8.1 00:08:49.935 See https://btrfs.readthedocs.io for more information. 00:08:49.935 00:08:49.935 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:49.935 NOTE: several default settings have changed in version 5.15, please make sure 00:08:49.935 this does not affect your deployments: 00:08:49.935 - DUP for metadata (-m dup) 00:08:49.935 - enabled no-holes (-O no-holes) 00:08:49.935 - enabled free-space-tree (-R free-space-tree) 00:08:49.935 00:08:49.935 Label: (null) 00:08:49.935 UUID: 00d144b4-a540-4df5-818f-6559e2f4aea9 00:08:49.935 Node size: 16384 00:08:49.935 Sector size: 4096 (CPU page size: 4096) 00:08:49.935 Filesystem size: 510.00MiB 00:08:49.935 Block group profiles: 00:08:49.935 Data: single 8.00MiB 00:08:49.935 Metadata: DUP 32.00MiB 00:08:49.935 System: DUP 8.00MiB 00:08:49.935 SSD detected: yes 00:08:49.935 Zoned device: no 00:08:49.935 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:49.935 Checksum: crc32c 00:08:49.935 Number of devices: 1 00:08:49.935 Devices: 00:08:49.935 ID SIZE PATH 00:08:49.935 1 510.00MiB /dev/nvme0n1p1 00:08:49.935 00:08:49.935 05:11:06 -- common/autotest_common.sh@931 -- # return 0 00:08:49.935 05:11:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.935 05:11:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.935 05:11:06 -- target/filesystem.sh@25 -- # sync 00:08:49.935 05:11:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.935 05:11:06 -- target/filesystem.sh@27 -- # sync 00:08:49.935 05:11:06 -- target/filesystem.sh@29 -- # i=0 00:08:49.935 05:11:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.935 05:11:06 -- target/filesystem.sh@37 -- # kill -0 1677365 00:08:49.935 05:11:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.935 05:11:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.935 05:11:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.935 05:11:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.935 00:08:49.935 real 0m0.241s 00:08:49.935 user 0m0.038s 00:08:49.935 sys 0m0.111s 00:08:49.935 05:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.935 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 ************************************ 00:08:49.935 END TEST filesystem_btrfs 00:08:49.935 ************************************ 00:08:49.935 05:11:06 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:49.935 05:11:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:49.935 05:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.935 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 ************************************ 00:08:49.935 START TEST filesystem_xfs 00:08:49.935 ************************************ 00:08:49.935 05:11:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:49.935 05:11:06 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:49.935 05:11:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.935 05:11:06 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:49.935 05:11:06 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:49.935 05:11:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:49.935 05:11:06 -- common/autotest_common.sh@914 -- # local i=0 00:08:49.935 05:11:06 -- common/autotest_common.sh@915 -- # local force 00:08:49.935 05:11:06 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:49.935 05:11:06 -- common/autotest_common.sh@920 -- # force=-f 00:08:49.935 05:11:06 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:50.195 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:50.195 = sectsz=512 attr=2, projid32bit=1 00:08:50.195 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:50.195 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:50.195 data = bsize=4096 blocks=130560, imaxpct=25 00:08:50.195 = sunit=0 swidth=0 blks 00:08:50.195 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:50.195 log =internal log bsize=4096 blocks=16384, version=2 00:08:50.195 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:50.195 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:50.195 Discarding blocks...Done. 00:08:50.195 05:11:06 -- common/autotest_common.sh@931 -- # return 0 00:08:50.195 05:11:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.195 05:11:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.195 05:11:06 -- target/filesystem.sh@25 -- # sync 00:08:50.195 05:11:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.195 05:11:06 -- target/filesystem.sh@27 -- # sync 00:08:50.195 05:11:06 -- target/filesystem.sh@29 -- # i=0 00:08:50.195 05:11:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.195 05:11:06 -- target/filesystem.sh@37 -- # kill -0 1677365 00:08:50.195 05:11:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.195 05:11:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.195 05:11:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.195 05:11:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.195 00:08:50.195 real 0m0.203s 00:08:50.195 user 0m0.027s 00:08:50.195 sys 0m0.082s 00:08:50.195 05:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.195 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.195 ************************************ 00:08:50.195 END TEST filesystem_xfs 00:08:50.195 ************************************ 00:08:50.195 05:11:06 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:50.195 05:11:06 -- target/filesystem.sh@93 -- # sync 00:08:50.195 05:11:06 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.577 05:11:07 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.577 05:11:07 -- common/autotest_common.sh@1208 -- # local i=0 00:08:51.577 05:11:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:51.577 05:11:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.578 05:11:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:51.578 05:11:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.578 05:11:07 -- common/autotest_common.sh@1220 -- # return 0 00:08:51.578 05:11:07 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.578 05:11:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.578 05:11:07 -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.578 05:11:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.578 05:11:07 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:51.578 05:11:07 -- target/filesystem.sh@101 -- # killprocess 1677365 00:08:51.578 05:11:07 -- common/autotest_common.sh@936 -- # '[' -z 1677365 ']' 00:08:51.578 05:11:07 -- common/autotest_common.sh@940 -- # kill -0 1677365 00:08:51.578 05:11:07 -- common/autotest_common.sh@941 -- # uname 00:08:51.578 05:11:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:51.578 05:11:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1677365 00:08:51.578 05:11:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:51.578 05:11:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:51.578 05:11:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1677365' 00:08:51.578 killing process with pid 1677365 00:08:51.578 05:11:07 -- common/autotest_common.sh@955 -- # kill 1677365 00:08:51.578 05:11:07 -- common/autotest_common.sh@960 -- # wait 1677365 00:08:51.837 05:11:08 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:51.837 00:08:51.837 real 0m7.870s 00:08:51.837 user 0m30.812s 00:08:51.837 sys 0m1.157s 00:08:51.837 05:11:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.837 05:11:08 -- common/autotest_common.sh@10 -- # set +x 00:08:51.837 ************************************ 00:08:51.837 END TEST nvmf_filesystem_no_in_capsule 00:08:51.837 ************************************ 00:08:51.837 05:11:08 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:51.837 05:11:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.837 05:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.837 05:11:08 -- common/autotest_common.sh@10 -- # set +x 00:08:51.837 ************************************ 00:08:51.837 START TEST nvmf_filesystem_in_capsule 00:08:51.837 ************************************ 00:08:51.837 05:11:08 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:51.837 05:11:08 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:51.837 05:11:08 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:51.837 05:11:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:51.837 05:11:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.837 05:11:08 -- common/autotest_common.sh@10 -- # set +x 00:08:51.837 05:11:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.837 05:11:08 -- nvmf/common.sh@469 -- # nvmfpid=1678983 00:08:51.837 05:11:08 -- nvmf/common.sh@470 -- # waitforlisten 1678983 00:08:51.837 05:11:08 -- common/autotest_common.sh@829 -- # '[' -z 1678983 ']' 00:08:51.837 05:11:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.837 05:11:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.837 05:11:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
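
Each filesystem_* subtest above ran the same smoke check from target/filesystem.sh before the suite tore the target down. Condensed, with the device, mountpoint, and NQN names taken from this log:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                # the target must survive the I/O
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # host detach once all fs runs finish
  kill "$nvmfpid" && wait "$nvmfpid"                # killprocess, as traced above
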
00:08:51.837 05:11:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.837 05:11:08 -- common/autotest_common.sh@10 -- # set +x 00:08:51.837 [2024-11-19 05:11:08.287544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.837 [2024-11-19 05:11:08.287594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.837 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.837 [2024-11-19 05:11:08.352397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.837 [2024-11-19 05:11:08.390958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.837 [2024-11-19 05:11:08.391063] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.837 [2024-11-19 05:11:08.391072] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.837 [2024-11-19 05:11:08.391081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.837 [2024-11-19 05:11:08.391126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.837 [2024-11-19 05:11:08.391244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.837 [2024-11-19 05:11:08.391307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.837 [2024-11-19 05:11:08.391309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.774 05:11:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.774 05:11:09 -- common/autotest_common.sh@862 -- # return 0 00:08:52.774 05:11:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:52.774 05:11:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.774 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.774 05:11:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.775 05:11:09 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:52.775 05:11:09 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:52.775 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.775 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 [2024-11-19 05:11:09.199012] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20b0200/0x20b46f0) succeed. 00:08:52.775 [2024-11-19 05:11:09.208414] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20b17f0/0x20f5d90) succeed. 
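
The in_capsule suite repeats the whole sequence with one transport knob changed: -c, the in-capsule data size. In the first run the target raised 0 to its 256-byte floor (per the rdma.c warning logged there); with 4096, a 4 KiB write can travel inside the command capsule rather than being fetched by the target with a separate RDMA read (that last interpretation is standard NVMe-oF behavior, not something this log states). Side by side:

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0      # first suite
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # this suite
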
00:08:52.775 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.775 05:11:09 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:52.775 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.775 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.033 Malloc1 00:08:53.033 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.033 05:11:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.033 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.033 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.033 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.033 05:11:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.033 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.033 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.033 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.033 05:11:09 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:53.033 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.033 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.033 [2024-11-19 05:11:09.472417] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.033 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.033 05:11:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:53.033 05:11:09 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:53.033 05:11:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:53.033 05:11:09 -- common/autotest_common.sh@1369 -- # local bs 00:08:53.033 05:11:09 -- common/autotest_common.sh@1370 -- # local nb 00:08:53.033 05:11:09 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:53.033 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.033 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.033 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.033 05:11:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:53.033 { 00:08:53.033 "name": "Malloc1", 00:08:53.033 "aliases": [ 00:08:53.033 "31777957-e101-4118-80e1-c654119accee" 00:08:53.033 ], 00:08:53.033 "product_name": "Malloc disk", 00:08:53.033 "block_size": 512, 00:08:53.033 "num_blocks": 1048576, 00:08:53.033 "uuid": "31777957-e101-4118-80e1-c654119accee", 00:08:53.033 "assigned_rate_limits": { 00:08:53.033 "rw_ios_per_sec": 0, 00:08:53.033 "rw_mbytes_per_sec": 0, 00:08:53.033 "r_mbytes_per_sec": 0, 00:08:53.033 "w_mbytes_per_sec": 0 00:08:53.033 }, 00:08:53.033 "claimed": true, 00:08:53.033 "claim_type": "exclusive_write", 00:08:53.033 "zoned": false, 00:08:53.033 "supported_io_types": { 00:08:53.033 "read": true, 00:08:53.033 "write": true, 00:08:53.033 "unmap": true, 00:08:53.033 "write_zeroes": true, 00:08:53.033 "flush": true, 00:08:53.033 "reset": true, 00:08:53.033 "compare": false, 00:08:53.033 "compare_and_write": false, 00:08:53.033 "abort": true, 00:08:53.033 "nvme_admin": false, 00:08:53.033 "nvme_io": false 00:08:53.033 }, 00:08:53.033 "memory_domains": [ 00:08:53.033 { 00:08:53.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.033 "dma_device_type": 2 00:08:53.033 } 00:08:53.033 ], 00:08:53.033 
"driver_specific": {} 00:08:53.033 } 00:08:53.033 ]' 00:08:53.033 05:11:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:53.033 05:11:09 -- common/autotest_common.sh@1372 -- # bs=512 00:08:53.033 05:11:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:53.033 05:11:09 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:53.033 05:11:09 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:53.033 05:11:09 -- common/autotest_common.sh@1377 -- # echo 512 00:08:53.033 05:11:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:53.033 05:11:09 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.412 05:11:10 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.412 05:11:10 -- common/autotest_common.sh@1187 -- # local i=0 00:08:54.412 05:11:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.412 05:11:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:54.412 05:11:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:56.320 05:11:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:56.320 05:11:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:56.320 05:11:12 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.320 05:11:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:56.320 05:11:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.320 05:11:12 -- common/autotest_common.sh@1197 -- # return 0 00:08:56.320 05:11:12 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:56.320 05:11:12 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:56.320 05:11:12 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:56.320 05:11:12 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:56.320 05:11:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:56.320 05:11:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:56.320 05:11:12 -- setup/common.sh@80 -- # echo 536870912 00:08:56.320 05:11:12 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:56.320 05:11:12 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:56.320 05:11:12 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:56.320 05:11:12 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:56.320 05:11:12 -- target/filesystem.sh@69 -- # partprobe 00:08:56.320 05:11:12 -- target/filesystem.sh@70 -- # sleep 1 00:08:57.258 05:11:13 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:57.258 05:11:13 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:57.258 05:11:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.258 05:11:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.258 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.258 ************************************ 00:08:57.258 START TEST filesystem_in_capsule_ext4 00:08:57.258 ************************************ 00:08:57.258 05:11:13 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:57.258 05:11:13 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:57.258 05:11:13 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:57.258 05:11:13 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:57.258 05:11:13 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:57.258 05:11:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:57.258 05:11:13 -- common/autotest_common.sh@914 -- # local i=0 00:08:57.258 05:11:13 -- common/autotest_common.sh@915 -- # local force 00:08:57.258 05:11:13 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:57.258 05:11:13 -- common/autotest_common.sh@918 -- # force=-F 00:08:57.258 05:11:13 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:57.258 mke2fs 1.47.0 (5-Feb-2023) 00:08:57.537 Discarding device blocks: 0/522240 done 00:08:57.537 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:57.537 Filesystem UUID: 73034bfd-27de-4821-a33b-324026738d0e 00:08:57.537 Superblock backups stored on blocks: 00:08:57.537 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:57.537 00:08:57.537 Allocating group tables: 0/64 done 00:08:57.537 Writing inode tables: 0/64 done 00:08:57.537 Creating journal (8192 blocks): done 00:08:57.537 Writing superblocks and filesystem accounting information: 0/64 done 00:08:57.537 00:08:57.537 05:11:13 -- common/autotest_common.sh@931 -- # return 0 00:08:57.537 05:11:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.537 05:11:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.537 05:11:13 -- target/filesystem.sh@25 -- # sync 00:08:57.537 05:11:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.537 05:11:13 -- target/filesystem.sh@27 -- # sync 00:08:57.537 05:11:13 -- target/filesystem.sh@29 -- # i=0 00:08:57.537 05:11:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.537 05:11:13 -- target/filesystem.sh@37 -- # kill -0 1678983 00:08:57.537 05:11:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.537 05:11:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.537 05:11:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.537 05:11:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.537 00:08:57.537 real 0m0.199s 00:08:57.537 user 0m0.025s 00:08:57.537 sys 0m0.084s 00:08:57.537 05:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.537 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.537 ************************************ 00:08:57.537 END TEST filesystem_in_capsule_ext4 00:08:57.537 ************************************ 00:08:57.537 05:11:14 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:57.537 05:11:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.537 05:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.537 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.537 ************************************ 00:08:57.537 START TEST filesystem_in_capsule_btrfs 00:08:57.537 ************************************ 00:08:57.537 05:11:14 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:57.537 05:11:14 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:57.537 05:11:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.537 05:11:14 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:57.537 05:11:14 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:57.537 05:11:14 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
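    # The ext4 leg that just finished is the template every make_filesystem test repeats:
    # format the namespace's partition, mount it, prove a small write survives sync, unmount.
    # A minimal sketch under the same assumptions (/dev/nvme0n1p1 created by the earlier parted step):
    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    # The btrfs and xfs legs below run the identical sequence with mkfs.btrfs -f / mkfs.xfs -f.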
00:08:57.537 05:11:14 -- common/autotest_common.sh@914 -- # local i=0 00:08:57.537 05:11:14 -- common/autotest_common.sh@915 -- # local force 00:08:57.538 05:11:14 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:57.538 05:11:14 -- common/autotest_common.sh@920 -- # force=-f 00:08:57.538 05:11:14 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:57.810 btrfs-progs v6.8.1 00:08:57.810 See https://btrfs.readthedocs.io for more information. 00:08:57.810 00:08:57.810 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:57.810 NOTE: several default settings have changed in version 5.15, please make sure 00:08:57.810 this does not affect your deployments: 00:08:57.810 - DUP for metadata (-m dup) 00:08:57.810 - enabled no-holes (-O no-holes) 00:08:57.810 - enabled free-space-tree (-R free-space-tree) 00:08:57.810 00:08:57.810 Label: (null) 00:08:57.810 UUID: a6d74be9-5046-470d-a439-bf0807048197 00:08:57.810 Node size: 16384 00:08:57.810 Sector size: 4096 (CPU page size: 4096) 00:08:57.810 Filesystem size: 510.00MiB 00:08:57.810 Block group profiles: 00:08:57.810 Data: single 8.00MiB 00:08:57.810 Metadata: DUP 32.00MiB 00:08:57.810 System: DUP 8.00MiB 00:08:57.810 SSD detected: yes 00:08:57.810 Zoned device: no 00:08:57.810 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:57.810 Checksum: crc32c 00:08:57.810 Number of devices: 1 00:08:57.810 Devices: 00:08:57.810 ID SIZE PATH 00:08:57.810 1 510.00MiB /dev/nvme0n1p1 00:08:57.810 00:08:57.810 05:11:14 -- common/autotest_common.sh@931 -- # return 0 00:08:57.810 05:11:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.810 05:11:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.810 05:11:14 -- target/filesystem.sh@25 -- # sync 00:08:57.810 05:11:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.810 05:11:14 -- target/filesystem.sh@27 -- # sync 00:08:57.810 05:11:14 -- target/filesystem.sh@29 -- # i=0 00:08:57.810 05:11:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.810 05:11:14 -- target/filesystem.sh@37 -- # kill -0 1678983 00:08:57.810 05:11:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.810 05:11:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.810 05:11:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.810 05:11:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.810 00:08:57.810 real 0m0.252s 00:08:57.810 user 0m0.027s 00:08:57.810 sys 0m0.135s 00:08:57.810 05:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.810 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.810 ************************************ 00:08:57.810 END TEST filesystem_in_capsule_btrfs 00:08:57.810 ************************************ 00:08:57.810 05:11:14 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:57.810 05:11:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.810 05:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.810 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.810 ************************************ 00:08:57.810 START TEST filesystem_in_capsule_xfs 00:08:57.810 ************************************ 00:08:57.810 05:11:14 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:57.810 05:11:14 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:57.810 05:11:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.810 
05:11:14 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:57.810 05:11:14 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:57.810 05:11:14 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:57.810 05:11:14 -- common/autotest_common.sh@914 -- # local i=0 00:08:57.810 05:11:14 -- common/autotest_common.sh@915 -- # local force 00:08:57.810 05:11:14 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:57.810 05:11:14 -- common/autotest_common.sh@920 -- # force=-f 00:08:57.810 05:11:14 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:58.071 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:58.071 = sectsz=512 attr=2, projid32bit=1 00:08:58.071 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:58.071 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:58.071 data = bsize=4096 blocks=130560, imaxpct=25 00:08:58.071 = sunit=0 swidth=0 blks 00:08:58.071 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:58.071 log =internal log bsize=4096 blocks=16384, version=2 00:08:58.071 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:58.071 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:58.071 Discarding blocks...Done. 00:08:58.071 05:11:14 -- common/autotest_common.sh@931 -- # return 0 00:08:58.071 05:11:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.071 05:11:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.071 05:11:14 -- target/filesystem.sh@25 -- # sync 00:08:58.071 05:11:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.071 05:11:14 -- target/filesystem.sh@27 -- # sync 00:08:58.071 05:11:14 -- target/filesystem.sh@29 -- # i=0 00:08:58.071 05:11:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.071 05:11:14 -- target/filesystem.sh@37 -- # kill -0 1678983 00:08:58.071 05:11:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.071 05:11:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.071 05:11:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.071 05:11:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.071 00:08:58.071 real 0m0.204s 00:08:58.071 user 0m0.026s 00:08:58.071 sys 0m0.084s 00:08:58.071 05:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.071 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:58.071 ************************************ 00:08:58.071 END TEST filesystem_in_capsule_xfs 00:08:58.071 ************************************ 00:08:58.071 05:11:14 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:58.071 05:11:14 -- target/filesystem.sh@93 -- # sync 00:08:58.331 05:11:14 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.271 05:11:15 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.271 05:11:15 -- common/autotest_common.sh@1208 -- # local i=0 00:08:59.271 05:11:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:59.271 05:11:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.271 05:11:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:59.271 05:11:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.271 05:11:15 -- common/autotest_common.sh@1220 -- # return 0 00:08:59.271 05:11:15 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:59.271 05:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.271 05:11:15 -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 05:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.271 05:11:15 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:59.271 05:11:15 -- target/filesystem.sh@101 -- # killprocess 1678983 00:08:59.271 05:11:15 -- common/autotest_common.sh@936 -- # '[' -z 1678983 ']' 00:08:59.271 05:11:15 -- common/autotest_common.sh@940 -- # kill -0 1678983 00:08:59.271 05:11:15 -- common/autotest_common.sh@941 -- # uname 00:08:59.271 05:11:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.271 05:11:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1678983 00:08:59.271 05:11:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:59.271 05:11:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:59.271 05:11:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1678983' 00:08:59.271 killing process with pid 1678983 00:08:59.271 05:11:15 -- common/autotest_common.sh@955 -- # kill 1678983 00:08:59.271 05:11:15 -- common/autotest_common.sh@960 -- # wait 1678983 00:08:59.841 05:11:16 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:59.841 00:08:59.841 real 0m7.860s 00:08:59.841 user 0m30.766s 00:08:59.841 sys 0m1.237s 00:08:59.841 05:11:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.841 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 ************************************ 00:08:59.841 END TEST nvmf_filesystem_in_capsule 00:08:59.841 ************************************ 00:08:59.841 05:11:16 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:59.841 05:11:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.841 05:11:16 -- nvmf/common.sh@116 -- # sync 00:08:59.841 05:11:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:59.841 05:11:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:59.841 05:11:16 -- nvmf/common.sh@119 -- # set +e 00:08:59.841 05:11:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.841 05:11:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:59.841 rmmod nvme_rdma 00:08:59.841 rmmod nvme_fabrics 00:08:59.841 05:11:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.841 05:11:16 -- nvmf/common.sh@123 -- # set -e 00:08:59.841 05:11:16 -- nvmf/common.sh@124 -- # return 0 00:08:59.841 05:11:16 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:59.841 05:11:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:59.841 05:11:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:59.841 00:08:59.841 real 0m22.989s 00:08:59.841 user 1m3.766s 00:08:59.841 sys 0m7.704s 00:08:59.841 05:11:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.841 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 ************************************ 00:08:59.841 END TEST nvmf_filesystem 00:08:59.841 ************************************ 00:08:59.841 05:11:16 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:59.841 05:11:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.841 05:11:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.841 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 ************************************ 00:08:59.841 START TEST nvmf_discovery 00:08:59.841 
************************************ 00:08:59.841 05:11:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:59.841 * Looking for test storage... 00:08:59.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.841 05:11:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:59.841 05:11:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:59.841 05:11:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:00.102 05:11:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:00.102 05:11:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:00.102 05:11:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:00.102 05:11:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:00.102 05:11:16 -- scripts/common.sh@335 -- # IFS=.-: 00:09:00.102 05:11:16 -- scripts/common.sh@335 -- # read -ra ver1 00:09:00.102 05:11:16 -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.102 05:11:16 -- scripts/common.sh@336 -- # read -ra ver2 00:09:00.102 05:11:16 -- scripts/common.sh@337 -- # local 'op=<' 00:09:00.102 05:11:16 -- scripts/common.sh@339 -- # ver1_l=2 00:09:00.102 05:11:16 -- scripts/common.sh@340 -- # ver2_l=1 00:09:00.102 05:11:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:00.102 05:11:16 -- scripts/common.sh@343 -- # case "$op" in 00:09:00.102 05:11:16 -- scripts/common.sh@344 -- # : 1 00:09:00.102 05:11:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:00.102 05:11:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.102 05:11:16 -- scripts/common.sh@364 -- # decimal 1 00:09:00.102 05:11:16 -- scripts/common.sh@352 -- # local d=1 00:09:00.102 05:11:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.102 05:11:16 -- scripts/common.sh@354 -- # echo 1 00:09:00.102 05:11:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:00.102 05:11:16 -- scripts/common.sh@365 -- # decimal 2 00:09:00.102 05:11:16 -- scripts/common.sh@352 -- # local d=2 00:09:00.102 05:11:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.102 05:11:16 -- scripts/common.sh@354 -- # echo 2 00:09:00.102 05:11:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:00.102 05:11:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:00.102 05:11:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:00.102 05:11:16 -- scripts/common.sh@367 -- # return 0 00:09:00.102 05:11:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.102 05:11:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.102 --rc genhtml_branch_coverage=1 00:09:00.102 --rc genhtml_function_coverage=1 00:09:00.102 --rc genhtml_legend=1 00:09:00.102 --rc geninfo_all_blocks=1 00:09:00.102 --rc geninfo_unexecuted_blocks=1 00:09:00.102 00:09:00.102 ' 00:09:00.102 05:11:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.102 --rc genhtml_branch_coverage=1 00:09:00.102 --rc genhtml_function_coverage=1 00:09:00.102 --rc genhtml_legend=1 00:09:00.102 --rc geninfo_all_blocks=1 00:09:00.102 --rc geninfo_unexecuted_blocks=1 00:09:00.102 00:09:00.102 ' 00:09:00.102 05:11:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:00.102 --rc genhtml_branch_coverage=1 00:09:00.102 --rc genhtml_function_coverage=1 00:09:00.102 --rc genhtml_legend=1 00:09:00.102 --rc geninfo_all_blocks=1 00:09:00.102 --rc geninfo_unexecuted_blocks=1 00:09:00.102 00:09:00.102 ' 00:09:00.102 05:11:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.102 --rc genhtml_branch_coverage=1 00:09:00.102 --rc genhtml_function_coverage=1 00:09:00.102 --rc genhtml_legend=1 00:09:00.102 --rc geninfo_all_blocks=1 00:09:00.102 --rc geninfo_unexecuted_blocks=1 00:09:00.102 00:09:00.102 ' 00:09:00.102 05:11:16 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.102 05:11:16 -- nvmf/common.sh@7 -- # uname -s 00:09:00.102 05:11:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.102 05:11:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.102 05:11:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.102 05:11:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.102 05:11:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.102 05:11:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.102 05:11:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.102 05:11:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.102 05:11:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.102 05:11:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.102 05:11:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:00.102 05:11:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:00.102 05:11:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.102 05:11:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.102 05:11:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.102 05:11:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.102 05:11:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.102 05:11:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.102 05:11:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.102 05:11:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.102 05:11:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.102 05:11:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.102 05:11:16 -- paths/export.sh@5 -- # export PATH 00:09:00.102 05:11:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.102 05:11:16 -- nvmf/common.sh@46 -- # : 0 00:09:00.102 05:11:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.102 05:11:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.102 05:11:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.102 05:11:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.102 05:11:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.102 05:11:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:00.102 05:11:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.102 05:11:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.102 05:11:16 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:00.102 05:11:16 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:00.102 05:11:16 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:00.102 05:11:16 -- target/discovery.sh@15 -- # hash nvme 00:09:00.102 05:11:16 -- target/discovery.sh@20 -- # nvmftestinit 00:09:00.102 05:11:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:00.102 05:11:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.102 05:11:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:00.102 05:11:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:00.102 05:11:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:00.102 05:11:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.102 05:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.102 05:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.102 05:11:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:00.102 05:11:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:00.102 05:11:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:00.102 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:09:06.680 05:11:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:06.680 05:11:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:06.680 05:11:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:06.680 05:11:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:06.680 05:11:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:06.680 05:11:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:06.680 05:11:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:06.680 05:11:22 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:06.680 05:11:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:06.680 05:11:22 -- nvmf/common.sh@295 -- # e810=() 00:09:06.680 05:11:22 -- nvmf/common.sh@295 -- # local -ga e810 00:09:06.680 05:11:22 -- nvmf/common.sh@296 -- # x722=() 00:09:06.680 05:11:22 -- nvmf/common.sh@296 -- # local -ga x722 00:09:06.680 05:11:22 -- nvmf/common.sh@297 -- # mlx=() 00:09:06.680 05:11:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:06.680 05:11:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.680 05:11:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:06.680 05:11:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:06.680 05:11:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:06.680 05:11:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:06.680 05:11:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:06.680 05:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:06.680 05:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:06.680 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:06.680 05:11:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:06.680 05:11:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.680 05:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:06.680 05:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:06.681 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:06.681 05:11:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.681 05:11:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:06.681 
05:11:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.681 05:11:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.681 05:11:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:06.681 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.681 05:11:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.681 05:11:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.681 05:11:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:06.681 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.681 05:11:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:06.681 05:11:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:06.681 05:11:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:06.681 05:11:23 -- nvmf/common.sh@57 -- # uname 00:09:06.681 05:11:23 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:06.681 05:11:23 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:06.681 05:11:23 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:06.681 05:11:23 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:06.681 05:11:23 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:06.681 05:11:23 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:06.681 05:11:23 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:06.681 05:11:23 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:06.681 05:11:23 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:06.681 05:11:23 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:06.681 05:11:23 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:06.681 05:11:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.681 05:11:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:06.681 05:11:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:06.681 05:11:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.681 05:11:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@104 -- # continue 2 00:09:06.681 05:11:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@104 -- # continue 2 00:09:06.681 05:11:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:06.681 05:11:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:06.681 05:11:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:06.681 05:11:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:06.681 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.681 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:06.681 altname enp217s0f0np0 00:09:06.681 altname ens818f0np0 00:09:06.681 inet 192.168.100.8/24 scope global mlx_0_0 00:09:06.681 valid_lft forever preferred_lft forever 00:09:06.681 05:11:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:06.681 05:11:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:06.681 05:11:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:06.681 05:11:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:06.681 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.681 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:06.681 altname enp217s0f1np1 00:09:06.681 altname ens818f1np1 00:09:06.681 inet 192.168.100.9/24 scope global mlx_0_1 00:09:06.681 valid_lft forever preferred_lft forever 00:09:06.681 05:11:23 -- nvmf/common.sh@410 -- # return 0 00:09:06.681 05:11:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:06.681 05:11:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:06.681 05:11:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:06.681 05:11:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:06.681 05:11:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.681 05:11:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:06.681 05:11:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:06.681 05:11:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.681 05:11:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:06.681 05:11:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@104 -- # continue 2 00:09:06.681 05:11:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.681 05:11:23 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.681 05:11:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@104 -- # continue 2 00:09:06.681 05:11:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:06.681 05:11:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:06.681 05:11:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:06.681 05:11:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:06.681 05:11:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:06.681 05:11:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:06.681 192.168.100.9' 00:09:06.681 05:11:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:06.681 192.168.100.9' 00:09:06.681 05:11:23 -- nvmf/common.sh@445 -- # head -n 1 00:09:06.681 05:11:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:06.681 05:11:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:06.681 192.168.100.9' 00:09:06.681 05:11:23 -- nvmf/common.sh@446 -- # tail -n +2 00:09:06.681 05:11:23 -- nvmf/common.sh@446 -- # head -n 1 00:09:06.681 05:11:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:06.681 05:11:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:06.681 05:11:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:06.681 05:11:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:06.681 05:11:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:06.681 05:11:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:06.681 05:11:23 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:06.941 05:11:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:06.941 05:11:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.941 05:11:23 -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 05:11:23 -- nvmf/common.sh@469 -- # nvmfpid=1683753 00:09:06.941 05:11:23 -- nvmf/common.sh@470 -- # waitforlisten 1683753 00:09:06.941 05:11:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.941 05:11:23 -- common/autotest_common.sh@829 -- # '[' -z 1683753 ']' 00:09:06.941 05:11:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.941 05:11:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.941 05:11:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.941 05:11:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.941 05:11:23 -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 [2024-11-19 05:11:23.296586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:06.941 [2024-11-19 05:11:23.296636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.941 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.941 [2024-11-19 05:11:23.366998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.941 [2024-11-19 05:11:23.405095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.941 [2024-11-19 05:11:23.405219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.941 [2024-11-19 05:11:23.405229] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.941 [2024-11-19 05:11:23.405238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.941 [2024-11-19 05:11:23.405283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.941 [2024-11-19 05:11:23.405377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.941 [2024-11-19 05:11:23.405463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.941 [2024-11-19 05:11:23.405465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.880 05:11:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.880 05:11:24 -- common/autotest_common.sh@862 -- # return 0 00:09:07.880 05:11:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:07.880 05:11:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.880 05:11:24 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 [2024-11-19 05:11:24.191445] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1162200/0x11666f0) succeed. 00:09:07.880 [2024-11-19 05:11:24.200602] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11637f0/0x11a7d90) succeed. 
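With the discovery-test target up, the trace next builds four single-namespace subsystems backed by null bdevs plus one referral, so the subsequent nvme discover reports six records: the current discovery subsystem, cnode1 through cnode4, and the port-4430 referral. A minimal sketch of that loop, mirroring the rpc_cmd calls traced below (the serial format matches the SPDK00000000000001..4 values in the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512               # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the test
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s "$(printf 'SPDK%014d' "$i")"
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430   # appears as Discovery Log Entry 5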
00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@26 -- # seq 1 4 00:09:07.880 05:11:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.880 05:11:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 Null1 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 [2024-11-19 05:11:24.362145] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.880 05:11:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 Null2 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.880 05:11:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 Null3 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.880 05:11:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.880 05:11:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:07.880 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.880 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 Null4 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:08.141 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.141 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:08.141 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.141 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:08.141 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.141 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:08.141 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.141 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:08.141 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.141 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.141 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.141 05:11:24 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:09:08.141 00:09:08.141 Discovery Log Number of Records 6, Generation counter 6 00:09:08.141 =====Discovery Log Entry 0====== 00:09:08.141 trtype: 
rdma 00:09:08.141 adrfam: ipv4 00:09:08.141 subtype: current discovery subsystem 00:09:08.141 treq: not required 00:09:08.141 portid: 0 00:09:08.141 trsvcid: 4420 00:09:08.141 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:08.141 traddr: 192.168.100.8 00:09:08.141 eflags: explicit discovery connections, duplicate discovery information 00:09:08.141 rdma_prtype: not specified 00:09:08.141 rdma_qptype: connected 00:09:08.141 rdma_cms: rdma-cm 00:09:08.141 rdma_pkey: 0x0000 00:09:08.141 =====Discovery Log Entry 1====== 00:09:08.141 trtype: rdma 00:09:08.141 adrfam: ipv4 00:09:08.141 subtype: nvme subsystem 00:09:08.141 treq: not required 00:09:08.141 portid: 0 00:09:08.141 trsvcid: 4420 00:09:08.141 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:08.141 traddr: 192.168.100.8 00:09:08.141 eflags: none 00:09:08.141 rdma_prtype: not specified 00:09:08.141 rdma_qptype: connected 00:09:08.141 rdma_cms: rdma-cm 00:09:08.141 rdma_pkey: 0x0000 00:09:08.141 =====Discovery Log Entry 2====== 00:09:08.141 trtype: rdma 00:09:08.141 adrfam: ipv4 00:09:08.141 subtype: nvme subsystem 00:09:08.141 treq: not required 00:09:08.141 portid: 0 00:09:08.141 trsvcid: 4420 00:09:08.141 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:08.141 traddr: 192.168.100.8 00:09:08.142 eflags: none 00:09:08.142 rdma_prtype: not specified 00:09:08.142 rdma_qptype: connected 00:09:08.142 rdma_cms: rdma-cm 00:09:08.142 rdma_pkey: 0x0000 00:09:08.142 =====Discovery Log Entry 3====== 00:09:08.142 trtype: rdma 00:09:08.142 adrfam: ipv4 00:09:08.142 subtype: nvme subsystem 00:09:08.142 treq: not required 00:09:08.142 portid: 0 00:09:08.142 trsvcid: 4420 00:09:08.142 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:08.142 traddr: 192.168.100.8 00:09:08.142 eflags: none 00:09:08.142 rdma_prtype: not specified 00:09:08.142 rdma_qptype: connected 00:09:08.142 rdma_cms: rdma-cm 00:09:08.142 rdma_pkey: 0x0000 00:09:08.142 =====Discovery Log Entry 4====== 00:09:08.142 trtype: rdma 00:09:08.142 adrfam: ipv4 00:09:08.142 subtype: nvme subsystem 00:09:08.142 treq: not required 00:09:08.142 portid: 0 00:09:08.142 trsvcid: 4420 00:09:08.142 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:08.142 traddr: 192.168.100.8 00:09:08.142 eflags: none 00:09:08.142 rdma_prtype: not specified 00:09:08.142 rdma_qptype: connected 00:09:08.142 rdma_cms: rdma-cm 00:09:08.142 rdma_pkey: 0x0000 00:09:08.142 =====Discovery Log Entry 5====== 00:09:08.142 trtype: rdma 00:09:08.142 adrfam: ipv4 00:09:08.142 subtype: discovery subsystem referral 00:09:08.142 treq: not required 00:09:08.142 portid: 0 00:09:08.142 trsvcid: 4430 00:09:08.142 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:08.142 traddr: 192.168.100.8 00:09:08.142 eflags: none 00:09:08.142 rdma_prtype: unrecognized 00:09:08.142 rdma_qptype: unrecognized 00:09:08.142 rdma_cms: unrecognized 00:09:08.142 rdma_pkey: 0x0000 00:09:08.142 05:11:24 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:08.142 Perform nvmf subsystem discovery via RPC 00:09:08.142 05:11:24 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 [2024-11-19 05:11:24.586632] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:08.142 [ 00:09:08.142 { 00:09:08.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:08.142 "subtype": "Discovery", 
00:09:08.142 "listen_addresses": [ 00:09:08.142 { 00:09:08.142 "transport": "RDMA", 00:09:08.142 "trtype": "RDMA", 00:09:08.142 "adrfam": "IPv4", 00:09:08.142 "traddr": "192.168.100.8", 00:09:08.142 "trsvcid": "4420" 00:09:08.142 } 00:09:08.142 ], 00:09:08.142 "allow_any_host": true, 00:09:08.142 "hosts": [] 00:09:08.142 }, 00:09:08.142 { 00:09:08.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.142 "subtype": "NVMe", 00:09:08.142 "listen_addresses": [ 00:09:08.142 { 00:09:08.142 "transport": "RDMA", 00:09:08.142 "trtype": "RDMA", 00:09:08.142 "adrfam": "IPv4", 00:09:08.142 "traddr": "192.168.100.8", 00:09:08.142 "trsvcid": "4420" 00:09:08.142 } 00:09:08.142 ], 00:09:08.142 "allow_any_host": true, 00:09:08.142 "hosts": [], 00:09:08.142 "serial_number": "SPDK00000000000001", 00:09:08.142 "model_number": "SPDK bdev Controller", 00:09:08.142 "max_namespaces": 32, 00:09:08.142 "min_cntlid": 1, 00:09:08.142 "max_cntlid": 65519, 00:09:08.142 "namespaces": [ 00:09:08.142 { 00:09:08.142 "nsid": 1, 00:09:08.142 "bdev_name": "Null1", 00:09:08.142 "name": "Null1", 00:09:08.142 "nguid": "B72417B271B9479D8E70F0CB79296A45", 00:09:08.142 "uuid": "b72417b2-71b9-479d-8e70-f0cb79296a45" 00:09:08.142 } 00:09:08.142 ] 00:09:08.142 }, 00:09:08.142 { 00:09:08.142 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:08.142 "subtype": "NVMe", 00:09:08.142 "listen_addresses": [ 00:09:08.142 { 00:09:08.142 "transport": "RDMA", 00:09:08.142 "trtype": "RDMA", 00:09:08.142 "adrfam": "IPv4", 00:09:08.142 "traddr": "192.168.100.8", 00:09:08.142 "trsvcid": "4420" 00:09:08.142 } 00:09:08.142 ], 00:09:08.142 "allow_any_host": true, 00:09:08.142 "hosts": [], 00:09:08.142 "serial_number": "SPDK00000000000002", 00:09:08.142 "model_number": "SPDK bdev Controller", 00:09:08.142 "max_namespaces": 32, 00:09:08.142 "min_cntlid": 1, 00:09:08.142 "max_cntlid": 65519, 00:09:08.142 "namespaces": [ 00:09:08.142 { 00:09:08.142 "nsid": 1, 00:09:08.142 "bdev_name": "Null2", 00:09:08.142 "name": "Null2", 00:09:08.142 "nguid": "EBE5280274244ECC9134D6778037F826", 00:09:08.142 "uuid": "ebe52802-7424-4ecc-9134-d6778037f826" 00:09:08.142 } 00:09:08.142 ] 00:09:08.142 }, 00:09:08.142 { 00:09:08.142 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:08.142 "subtype": "NVMe", 00:09:08.142 "listen_addresses": [ 00:09:08.142 { 00:09:08.142 "transport": "RDMA", 00:09:08.142 "trtype": "RDMA", 00:09:08.142 "adrfam": "IPv4", 00:09:08.142 "traddr": "192.168.100.8", 00:09:08.142 "trsvcid": "4420" 00:09:08.142 } 00:09:08.142 ], 00:09:08.142 "allow_any_host": true, 00:09:08.142 "hosts": [], 00:09:08.142 "serial_number": "SPDK00000000000003", 00:09:08.142 "model_number": "SPDK bdev Controller", 00:09:08.142 "max_namespaces": 32, 00:09:08.142 "min_cntlid": 1, 00:09:08.142 "max_cntlid": 65519, 00:09:08.142 "namespaces": [ 00:09:08.142 { 00:09:08.142 "nsid": 1, 00:09:08.142 "bdev_name": "Null3", 00:09:08.142 "name": "Null3", 00:09:08.142 "nguid": "3F1CB5E625694BF0B151C8B5C5E42FE8", 00:09:08.142 "uuid": "3f1cb5e6-2569-4bf0-b151-c8b5c5e42fe8" 00:09:08.142 } 00:09:08.142 ] 00:09:08.142 }, 00:09:08.142 { 00:09:08.142 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:08.142 "subtype": "NVMe", 00:09:08.142 "listen_addresses": [ 00:09:08.142 { 00:09:08.142 "transport": "RDMA", 00:09:08.142 "trtype": "RDMA", 00:09:08.142 "adrfam": "IPv4", 00:09:08.142 "traddr": "192.168.100.8", 00:09:08.142 "trsvcid": "4420" 00:09:08.142 } 00:09:08.142 ], 00:09:08.142 "allow_any_host": true, 00:09:08.142 "hosts": [], 00:09:08.142 "serial_number": "SPDK00000000000004", 00:09:08.142 "model_number": "SPDK bdev 
Controller", 00:09:08.142 "max_namespaces": 32, 00:09:08.142 "min_cntlid": 1, 00:09:08.142 "max_cntlid": 65519, 00:09:08.142 "namespaces": [ 00:09:08.142 { 00:09:08.142 "nsid": 1, 00:09:08.142 "bdev_name": "Null4", 00:09:08.142 "name": "Null4", 00:09:08.142 "nguid": "29C5D17D7576419CA90257566118ED3A", 00:09:08.142 "uuid": "29c5d17d-7576-419c-a902-57566118ed3a" 00:09:08.142 } 00:09:08.142 ] 00:09:08.142 } 00:09:08.142 ] 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@42 -- # seq 1 4 00:09:08.142 05:11:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.142 05:11:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.142 05:11:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.142 05:11:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.142 05:11:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 
05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.142 05:11:24 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:08.142 05:11:24 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:08.142 05:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.142 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.401 05:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.401 05:11:24 -- target/discovery.sh@49 -- # check_bdevs= 00:09:08.401 05:11:24 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:08.401 05:11:24 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:08.401 05:11:24 -- target/discovery.sh@57 -- # nvmftestfini 00:09:08.401 05:11:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:08.401 05:11:24 -- nvmf/common.sh@116 -- # sync 00:09:08.401 05:11:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:08.401 05:11:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:08.401 05:11:24 -- nvmf/common.sh@119 -- # set +e 00:09:08.401 05:11:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:08.401 05:11:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:08.401 rmmod nvme_rdma 00:09:08.401 rmmod nvme_fabrics 00:09:08.401 05:11:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:08.401 05:11:24 -- nvmf/common.sh@123 -- # set -e 00:09:08.401 05:11:24 -- nvmf/common.sh@124 -- # return 0 00:09:08.401 05:11:24 -- nvmf/common.sh@477 -- # '[' -n 1683753 ']' 00:09:08.402 05:11:24 -- nvmf/common.sh@478 -- # killprocess 1683753 00:09:08.402 05:11:24 -- common/autotest_common.sh@936 -- # '[' -z 1683753 ']' 00:09:08.402 05:11:24 -- common/autotest_common.sh@940 -- # kill -0 1683753 00:09:08.402 05:11:24 -- common/autotest_common.sh@941 -- # uname 00:09:08.402 05:11:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:08.402 05:11:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1683753 00:09:08.402 05:11:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:08.402 05:11:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:08.402 05:11:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1683753' 00:09:08.402 killing process with pid 1683753 00:09:08.402 05:11:24 -- common/autotest_common.sh@955 -- # kill 1683753 00:09:08.402 [2024-11-19 05:11:24.848424] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:08.402 05:11:24 -- common/autotest_common.sh@960 -- # wait 1683753 00:09:08.661 05:11:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:08.661 05:11:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:08.661 00:09:08.661 real 0m8.826s 00:09:08.661 user 0m8.732s 00:09:08.661 sys 0m5.670s 00:09:08.661 05:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.661 05:11:25 -- common/autotest_common.sh@10 -- # set +x 00:09:08.661 ************************************ 00:09:08.661 END TEST nvmf_discovery 00:09:08.661 ************************************ 00:09:08.661 05:11:25 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:08.661 05:11:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:08.661 05:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.661 05:11:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.661 ************************************ 00:09:08.661 START TEST nvmf_referrals 00:09:08.661 ************************************ 00:09:08.661 05:11:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:08.661 * Looking for test storage... 00:09:08.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:08.921 05:11:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.921 05:11:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.921 05:11:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.921 05:11:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.921 05:11:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.921 05:11:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.921 05:11:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.921 05:11:25 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.921 05:11:25 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.921 05:11:25 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.921 05:11:25 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.921 05:11:25 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.921 05:11:25 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.921 05:11:25 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.921 05:11:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.921 05:11:25 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.921 05:11:25 -- scripts/common.sh@344 -- # : 1 00:09:08.921 05:11:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.921 05:11:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.921 05:11:25 -- scripts/common.sh@364 -- # decimal 1 00:09:08.921 05:11:25 -- scripts/common.sh@352 -- # local d=1 00:09:08.921 05:11:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.921 05:11:25 -- scripts/common.sh@354 -- # echo 1 00:09:08.921 05:11:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.921 05:11:25 -- scripts/common.sh@365 -- # decimal 2 00:09:08.921 05:11:25 -- scripts/common.sh@352 -- # local d=2 00:09:08.921 05:11:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.921 05:11:25 -- scripts/common.sh@354 -- # echo 2 00:09:08.921 05:11:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.921 05:11:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.921 05:11:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.921 05:11:25 -- scripts/common.sh@367 -- # return 0 00:09:08.921 05:11:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.921 05:11:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.921 --rc genhtml_branch_coverage=1 00:09:08.921 --rc genhtml_function_coverage=1 00:09:08.921 --rc genhtml_legend=1 00:09:08.921 --rc geninfo_all_blocks=1 00:09:08.921 --rc geninfo_unexecuted_blocks=1 00:09:08.921 00:09:08.921 ' 00:09:08.921 05:11:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.921 --rc genhtml_branch_coverage=1 00:09:08.921 --rc genhtml_function_coverage=1 00:09:08.921 --rc genhtml_legend=1 00:09:08.921 --rc geninfo_all_blocks=1 00:09:08.921 --rc geninfo_unexecuted_blocks=1 00:09:08.921 00:09:08.921 ' 00:09:08.921 
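# For reference — a minimal sketch of the nvmf_discovery flow that just finished
# above. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and
# the NQNs, serials, and listener address/port are the values this run used.
for i in $(seq 1 4); do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512              # null bdev: size in MB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                  # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
nvme discover -t rdma -a 192.168.100.8 -s 4420   # expects 6 records: discovery + cnode1-4 + 1 referral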
05:11:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.921 --rc genhtml_branch_coverage=1 00:09:08.921 --rc genhtml_function_coverage=1 00:09:08.921 --rc genhtml_legend=1 00:09:08.921 --rc geninfo_all_blocks=1 00:09:08.921 --rc geninfo_unexecuted_blocks=1 00:09:08.921 00:09:08.921 ' 00:09:08.921 05:11:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.921 --rc genhtml_branch_coverage=1 00:09:08.921 --rc genhtml_function_coverage=1 00:09:08.922 --rc genhtml_legend=1 00:09:08.922 --rc geninfo_all_blocks=1 00:09:08.922 --rc geninfo_unexecuted_blocks=1 00:09:08.922 00:09:08.922 ' 00:09:08.922 05:11:25 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.922 05:11:25 -- nvmf/common.sh@7 -- # uname -s 00:09:08.922 05:11:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.922 05:11:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.922 05:11:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.922 05:11:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.922 05:11:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.922 05:11:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.922 05:11:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.922 05:11:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.922 05:11:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.922 05:11:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.922 05:11:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:08.922 05:11:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:08.922 05:11:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.922 05:11:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.922 05:11:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.922 05:11:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.922 05:11:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.922 05:11:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.922 05:11:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.922 05:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.922 05:11:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.922 05:11:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.922 05:11:25 -- paths/export.sh@5 -- # export PATH 00:09:08.922 05:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.922 05:11:25 -- nvmf/common.sh@46 -- # : 0 00:09:08.922 05:11:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.922 05:11:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.922 05:11:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.922 05:11:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.922 05:11:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.922 05:11:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:08.922 05:11:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.922 05:11:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.922 05:11:25 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:08.922 05:11:25 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:08.922 05:11:25 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:08.922 05:11:25 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:08.922 05:11:25 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:08.922 05:11:25 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:08.922 05:11:25 -- target/referrals.sh@37 -- # nvmftestinit 00:09:08.922 05:11:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:08.922 05:11:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.922 05:11:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:08.922 05:11:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:08.922 05:11:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:08.922 05:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.922 05:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.922 05:11:25 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:08.922 05:11:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:08.922 05:11:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:08.922 05:11:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:08.922 05:11:25 -- common/autotest_common.sh@10 -- # set +x 00:09:15.501 05:11:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:15.501 05:11:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:15.501 05:11:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:15.501 05:11:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:15.501 05:11:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:15.501 05:11:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:15.501 05:11:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:15.501 05:11:31 -- nvmf/common.sh@294 -- # net_devs=() 00:09:15.501 05:11:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:15.501 05:11:31 -- nvmf/common.sh@295 -- # e810=() 00:09:15.501 05:11:31 -- nvmf/common.sh@295 -- # local -ga e810 00:09:15.501 05:11:31 -- nvmf/common.sh@296 -- # x722=() 00:09:15.501 05:11:31 -- nvmf/common.sh@296 -- # local -ga x722 00:09:15.501 05:11:31 -- nvmf/common.sh@297 -- # mlx=() 00:09:15.501 05:11:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:15.501 05:11:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.501 05:11:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.501 05:11:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.501 05:11:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:15.501 05:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:15.501 05:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:15.501 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:15.501 05:11:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.501 05:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:15.501 05:11:32 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:15.501 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:15.501 05:11:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.501 05:11:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:15.501 05:11:32 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:15.501 05:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.501 05:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:15.501 05:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.501 05:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:15.501 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:15.501 05:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:15.501 05:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.501 05:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:15.501 05:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.501 05:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:15.501 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:15.501 05:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.501 05:11:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:15.501 05:11:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:15.501 05:11:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:15.501 05:11:32 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:15.501 05:11:32 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:15.501 05:11:32 -- nvmf/common.sh@57 -- # uname 00:09:15.501 05:11:32 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:15.501 05:11:32 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:15.501 05:11:32 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:15.501 05:11:32 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:15.501 05:11:32 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:15.501 05:11:32 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:15.501 05:11:32 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:15.761 05:11:32 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:15.761 05:11:32 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:15.761 05:11:32 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:15.761 05:11:32 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:15.761 05:11:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.761 05:11:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:15.761 05:11:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:15.761 05:11:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.761 05:11:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:15.761 05:11:32 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:15.761 05:11:32 -- nvmf/common.sh@104 -- # continue 2 00:09:15.761 05:11:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:15.761 05:11:32 -- nvmf/common.sh@104 -- # continue 2 00:09:15.761 05:11:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:15.761 05:11:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:15.761 05:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:15.761 05:11:32 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:15.761 05:11:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:15.761 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.761 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:15.761 altname enp217s0f0np0 00:09:15.761 altname ens818f0np0 00:09:15.761 inet 192.168.100.8/24 scope global mlx_0_0 00:09:15.761 valid_lft forever preferred_lft forever 00:09:15.761 05:11:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:15.761 05:11:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:15.761 05:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:15.761 05:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:15.761 05:11:32 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:15.761 05:11:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:15.761 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.761 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:15.761 altname enp217s0f1np1 00:09:15.761 altname ens818f1np1 00:09:15.761 inet 192.168.100.9/24 scope global mlx_0_1 00:09:15.761 valid_lft forever preferred_lft forever 00:09:15.761 05:11:32 -- nvmf/common.sh@410 -- # return 0 00:09:15.761 05:11:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:15.761 05:11:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:15.761 05:11:32 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:15.761 05:11:32 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:15.761 05:11:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.761 05:11:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:15.761 05:11:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:15.761 05:11:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.761 05:11:32 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:15.761 05:11:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.761 05:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:15.761 05:11:32 -- nvmf/common.sh@104 -- # continue 2 00:09:15.761 05:11:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.761 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.762 05:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.762 05:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.762 05:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:15.762 05:11:32 -- nvmf/common.sh@104 -- # continue 2 00:09:15.762 05:11:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:15.762 05:11:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:15.762 05:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:15.762 05:11:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:15.762 05:11:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:15.762 05:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:15.762 05:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:15.762 05:11:32 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:15.762 192.168.100.9' 00:09:15.762 05:11:32 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:15.762 192.168.100.9' 00:09:15.762 05:11:32 -- nvmf/common.sh@445 -- # head -n 1 00:09:15.762 05:11:32 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:15.762 05:11:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:15.762 192.168.100.9' 00:09:15.762 05:11:32 -- nvmf/common.sh@446 -- # tail -n +2 00:09:15.762 05:11:32 -- nvmf/common.sh@446 -- # head -n 1 00:09:15.762 05:11:32 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:15.762 05:11:32 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:15.762 05:11:32 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:15.762 05:11:32 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:15.762 05:11:32 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:15.762 05:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:15.762 05:11:32 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:15.762 05:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:15.762 05:11:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.762 05:11:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.762 05:11:32 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.762 05:11:32 -- nvmf/common.sh@469 -- # nvmfpid=1687477 00:09:15.762 05:11:32 -- nvmf/common.sh@470 -- # waitforlisten 1687477 00:09:15.762 05:11:32 -- common/autotest_common.sh@829 -- # '[' -z 1687477 ']' 00:09:15.762 05:11:32 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:15.762 05:11:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.762 05:11:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.762 05:11:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.762 05:11:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.762 [2024-11-19 05:11:32.282117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.762 [2024-11-19 05:11:32.282165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.762 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.021 [2024-11-19 05:11:32.354215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.021 [2024-11-19 05:11:32.392074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.021 [2024-11-19 05:11:32.392183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.021 [2024-11-19 05:11:32.392193] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.021 [2024-11-19 05:11:32.392201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.021 [2024-11-19 05:11:32.392243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.021 [2024-11-19 05:11:32.392342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.021 [2024-11-19 05:11:32.392403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.021 [2024-11-19 05:11:32.392405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.590 05:11:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.590 05:11:33 -- common/autotest_common.sh@862 -- # return 0 00:09:16.590 05:11:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:16.590 05:11:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.590 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.850 05:11:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.850 05:11:33 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:16.850 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.850 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.850 [2024-11-19 05:11:33.189992] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2505200/0x25096f0) succeed. 00:09:16.851 [2024-11-19 05:11:33.199255] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25067f0/0x254ad90) succeed. 
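# Bring-up sketch for the referrals test above, using the binary and flags from
# this trace; the backgrounding and RPC-socket wait are what the nvmfappstart
# and waitforlisten helpers do internally.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# ...wait for /var/tmp/spdk.sock to accept RPCs, then create the RDMA transport
# (this is what emits the two create_ib_device notices for mlx5_0/mlx5_1 above):
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 8009
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done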
00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 [2024-11-19 05:11:33.321962] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.851 05:11:33 -- target/referrals.sh@48 -- # jq length 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 05:11:33 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:16.851 05:11:33 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:16.851 05:11:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:16.851 05:11:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.851 05:11:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:16.851 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 05:11:33 -- target/referrals.sh@21 -- # sort 00:09:16.851 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.111 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:17.111 05:11:33 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:17.111 05:11:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.111 05:11:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # sort 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
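# The checks from here on compare the target's own view of the referral list
# with what a host sees in the discovery log; a sketch of the two pipelines the
# get_referral_ips helper runs (hostnqn/hostid as in this run):
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e \
    -t rdma -a 192.168.100.8 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# Both pipelines should print 127.0.0.2, 127.0.0.3, 127.0.0.4 (one per line)
# while the referrals exist, and nothing after they are removed again.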
00:09:17.111 05:11:33 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:17.111 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.111 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.111 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:17.111 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.111 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.111 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:17.111 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.111 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.111 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.111 05:11:33 -- target/referrals.sh@56 -- # jq length 00:09:17.111 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.111 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.111 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.111 05:11:33 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:17.111 05:11:33 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:17.111 05:11:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.111 05:11:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.111 05:11:33 -- target/referrals.sh@26 -- # sort 00:09:17.371 05:11:33 -- target/referrals.sh@26 -- # echo 00:09:17.371 05:11:33 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:17.371 05:11:33 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:17.371 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.371 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.371 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.371 05:11:33 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:17.371 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.371 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.371 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.371 05:11:33 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:17.371 05:11:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.371 05:11:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.371 05:11:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.371 05:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.371 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:17.371 05:11:33 -- 
target/referrals.sh@21 -- # sort 00:09:17.371 05:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.371 05:11:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:17.371 05:11:33 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:17.371 05:11:33 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:17.371 05:11:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.371 05:11:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.371 05:11:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.371 05:11:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.371 05:11:33 -- target/referrals.sh@26 -- # sort 00:09:17.371 05:11:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:17.371 05:11:33 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:17.371 05:11:33 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:17.371 05:11:33 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:17.371 05:11:33 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.371 05:11:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.371 05:11:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.631 05:11:33 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:17.631 05:11:33 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.631 05:11:33 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:17.631 05:11:33 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.631 05:11:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.631 05:11:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:17.631 05:11:34 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:17.631 05:11:34 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:17.631 05:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.631 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:17.631 05:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.631 05:11:34 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:17.631 05:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.631 05:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.631 05:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.631 05:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.631 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:17.631 05:11:34 -- target/referrals.sh@21 -- # 
sort 00:09:17.631 05:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.631 05:11:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:17.631 05:11:34 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.631 05:11:34 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:17.631 05:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.631 05:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.631 05:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.631 05:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.631 05:11:34 -- target/referrals.sh@26 -- # sort 00:09:17.891 05:11:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:17.891 05:11:34 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.891 05:11:34 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:17.891 05:11:34 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:17.891 05:11:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.891 05:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.891 05:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.891 05:11:34 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:17.891 05:11:34 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.891 05:11:34 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:17.891 05:11:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.891 05:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:17.891 05:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:18.150 05:11:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:18.151 05:11:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:18.151 05:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.151 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:18.151 05:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.151 05:11:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.151 05:11:34 -- target/referrals.sh@82 -- # jq length 00:09:18.151 05:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.151 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:18.151 05:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.151 05:11:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:18.151 05:11:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:18.151 05:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.151 05:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.151 05:11:34 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.151 05:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.151 05:11:34 -- target/referrals.sh@26 -- # sort 00:09:18.151 05:11:34 -- target/referrals.sh@26 -- # echo 00:09:18.151 05:11:34 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:18.151 05:11:34 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:18.151 05:11:34 -- target/referrals.sh@86 -- # nvmftestfini 00:09:18.151 05:11:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:18.151 05:11:34 -- nvmf/common.sh@116 -- # sync 00:09:18.151 05:11:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:18.151 05:11:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:18.151 05:11:34 -- nvmf/common.sh@119 -- # set +e 00:09:18.151 05:11:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:18.151 05:11:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:18.151 rmmod nvme_rdma 00:09:18.151 rmmod nvme_fabrics 00:09:18.151 05:11:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:18.151 05:11:34 -- nvmf/common.sh@123 -- # set -e 00:09:18.151 05:11:34 -- nvmf/common.sh@124 -- # return 0 00:09:18.151 05:11:34 -- nvmf/common.sh@477 -- # '[' -n 1687477 ']' 00:09:18.151 05:11:34 -- nvmf/common.sh@478 -- # killprocess 1687477 00:09:18.151 05:11:34 -- common/autotest_common.sh@936 -- # '[' -z 1687477 ']' 00:09:18.151 05:11:34 -- common/autotest_common.sh@940 -- # kill -0 1687477 00:09:18.151 05:11:34 -- common/autotest_common.sh@941 -- # uname 00:09:18.151 05:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:18.151 05:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1687477 00:09:18.410 05:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:18.410 05:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:18.410 05:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1687477' 00:09:18.410 killing process with pid 1687477 00:09:18.410 05:11:34 -- common/autotest_common.sh@955 -- # kill 1687477 00:09:18.410 05:11:34 -- common/autotest_common.sh@960 -- # wait 1687477 00:09:18.670 05:11:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:18.670 05:11:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:18.670 00:09:18.670 real 0m9.869s 00:09:18.670 user 0m13.118s 00:09:18.670 sys 0m6.169s 00:09:18.670 05:11:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.670 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:09:18.670 ************************************ 00:09:18.670 END TEST nvmf_referrals 00:09:18.670 ************************************ 00:09:18.670 05:11:35 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:18.670 05:11:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:18.670 05:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.670 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:09:18.670 ************************************ 00:09:18.670 START TEST nvmf_connect_disconnect 00:09:18.670 ************************************ 00:09:18.670 05:11:35 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:18.670 * Looking for test storage... 00:09:18.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:18.670 05:11:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:18.670 05:11:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:18.670 05:11:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:18.670 05:11:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:18.670 05:11:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:18.670 05:11:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:18.670 05:11:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:18.670 05:11:35 -- scripts/common.sh@335 -- # IFS=.-: 00:09:18.670 05:11:35 -- scripts/common.sh@335 -- # read -ra ver1 00:09:18.670 05:11:35 -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.670 05:11:35 -- scripts/common.sh@336 -- # read -ra ver2 00:09:18.670 05:11:35 -- scripts/common.sh@337 -- # local 'op=<' 00:09:18.670 05:11:35 -- scripts/common.sh@339 -- # ver1_l=2 00:09:18.670 05:11:35 -- scripts/common.sh@340 -- # ver2_l=1 00:09:18.670 05:11:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:18.670 05:11:35 -- scripts/common.sh@343 -- # case "$op" in 00:09:18.670 05:11:35 -- scripts/common.sh@344 -- # : 1 00:09:18.670 05:11:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:18.670 05:11:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.670 05:11:35 -- scripts/common.sh@364 -- # decimal 1 00:09:18.670 05:11:35 -- scripts/common.sh@352 -- # local d=1 00:09:18.670 05:11:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.670 05:11:35 -- scripts/common.sh@354 -- # echo 1 00:09:18.670 05:11:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:18.670 05:11:35 -- scripts/common.sh@365 -- # decimal 2 00:09:18.670 05:11:35 -- scripts/common.sh@352 -- # local d=2 00:09:18.670 05:11:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.670 05:11:35 -- scripts/common.sh@354 -- # echo 2 00:09:18.670 05:11:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:18.930 05:11:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:18.930 05:11:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:18.930 05:11:35 -- scripts/common.sh@367 -- # return 0 00:09:18.930 05:11:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.930 05:11:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.930 --rc genhtml_branch_coverage=1 00:09:18.930 --rc genhtml_function_coverage=1 00:09:18.930 --rc genhtml_legend=1 00:09:18.930 --rc geninfo_all_blocks=1 00:09:18.930 --rc geninfo_unexecuted_blocks=1 00:09:18.930 00:09:18.930 ' 00:09:18.930 05:11:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.930 --rc genhtml_branch_coverage=1 00:09:18.930 --rc genhtml_function_coverage=1 00:09:18.930 --rc genhtml_legend=1 00:09:18.930 --rc geninfo_all_blocks=1 00:09:18.930 --rc geninfo_unexecuted_blocks=1 00:09:18.930 00:09:18.930 ' 00:09:18.930 05:11:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.930 --rc genhtml_branch_coverage=1 00:09:18.930 --rc genhtml_function_coverage=1 
00:09:18.930 --rc genhtml_legend=1 00:09:18.930 --rc geninfo_all_blocks=1 00:09:18.930 --rc geninfo_unexecuted_blocks=1 00:09:18.930 00:09:18.930 ' 00:09:18.930 05:11:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.930 --rc genhtml_branch_coverage=1 00:09:18.930 --rc genhtml_function_coverage=1 00:09:18.930 --rc genhtml_legend=1 00:09:18.930 --rc geninfo_all_blocks=1 00:09:18.930 --rc geninfo_unexecuted_blocks=1 00:09:18.930 00:09:18.930 ' 00:09:18.930 05:11:35 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.930 05:11:35 -- nvmf/common.sh@7 -- # uname -s 00:09:18.930 05:11:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.930 05:11:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.930 05:11:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.930 05:11:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.930 05:11:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.930 05:11:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.930 05:11:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.930 05:11:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.930 05:11:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.930 05:11:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.930 05:11:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:18.930 05:11:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:18.930 05:11:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.930 05:11:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.930 05:11:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.930 05:11:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:18.930 05:11:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.930 05:11:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.930 05:11:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.930 05:11:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.930 05:11:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.930 05:11:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.930 05:11:35 -- paths/export.sh@5 -- # export PATH 00:09:18.930 05:11:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.930 05:11:35 -- nvmf/common.sh@46 -- # : 0 00:09:18.930 05:11:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:18.930 05:11:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:18.930 05:11:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:18.930 05:11:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.930 05:11:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.930 05:11:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:18.930 05:11:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:18.930 05:11:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:18.930 05:11:35 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.930 05:11:35 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.930 05:11:35 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:18.930 05:11:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:18.930 05:11:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.930 05:11:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:18.930 05:11:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:18.930 05:11:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:18.930 05:11:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.930 05:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.930 05:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.930 05:11:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:18.930 05:11:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:18.930 05:11:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:18.930 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:09:25.504 05:11:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:25.504 05:11:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:25.504 05:11:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:25.504 05:11:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:25.504 05:11:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:25.504 05:11:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:25.504 05:11:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:25.504 05:11:41 -- nvmf/common.sh@294 -- # net_devs=() 00:09:25.504 05:11:41 -- nvmf/common.sh@294 -- # local -ga net_devs 
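The gather_supported_nvmf_pci_devs trace that begins here buckets PCI functions into e810/x722/mlx arrays by vendor:device pair and later resolves each function to its net interface through sysfs. A minimal sketch of that pattern, assuming a standard Linux sysfs layout (the standalone loop is illustrative, not SPDK's actual pci_bus_cache machinery):

    # Collect Mellanox ConnectX-4 Lx functions (0x15b3:0x1015), the IDs this
    # run matches for 0000:d9:00.0 and 0000:d9:00.1.
    mlx=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x15b3 && $device == 0x1015 ]] && mlx+=("${dev##*/}")
    done
    # Resolve each function to its netdev, as the trace does below with
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    for pci in "${mlx[@]}"; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done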
00:09:25.504 05:11:41 -- nvmf/common.sh@295 -- # e810=() 00:09:25.504 05:11:41 -- nvmf/common.sh@295 -- # local -ga e810 00:09:25.504 05:11:41 -- nvmf/common.sh@296 -- # x722=() 00:09:25.504 05:11:41 -- nvmf/common.sh@296 -- # local -ga x722 00:09:25.504 05:11:41 -- nvmf/common.sh@297 -- # mlx=() 00:09:25.504 05:11:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:25.504 05:11:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.504 05:11:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:25.504 05:11:41 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:25.504 05:11:41 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:25.504 05:11:41 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:25.504 05:11:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:25.504 05:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:25.504 05:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:25.504 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:25.504 05:11:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.504 05:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:25.504 05:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:25.504 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:25.504 05:11:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.504 05:11:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:25.504 05:11:41 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:25.504 05:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:25.504 05:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.505 05:11:41 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:25.505 05:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.505 05:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:25.505 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.505 05:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.505 05:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:25.505 05:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.505 05:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:25.505 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.505 05:11:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:25.505 05:11:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:25.505 05:11:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:25.505 05:11:41 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:25.505 05:11:41 -- nvmf/common.sh@57 -- # uname 00:09:25.505 05:11:41 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:25.505 05:11:41 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:25.505 05:11:41 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:25.505 05:11:41 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:25.505 05:11:41 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:25.505 05:11:41 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:25.505 05:11:41 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:25.505 05:11:41 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:25.505 05:11:41 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:25.505 05:11:41 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:25.505 05:11:41 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:25.505 05:11:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.505 05:11:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:25.505 05:11:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:25.505 05:11:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.505 05:11:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:25.505 05:11:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@104 -- # continue 2 00:09:25.505 05:11:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@104 -- # continue 2 00:09:25.505 05:11:41 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:25.505 05:11:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.505 05:11:41 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:25.505 05:11:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:25.505 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.505 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:25.505 altname enp217s0f0np0 00:09:25.505 altname ens818f0np0 00:09:25.505 inet 192.168.100.8/24 scope global mlx_0_0 00:09:25.505 valid_lft forever preferred_lft forever 00:09:25.505 05:11:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:25.505 05:11:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.505 05:11:41 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:25.505 05:11:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:25.505 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.505 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:25.505 altname enp217s0f1np1 00:09:25.505 altname ens818f1np1 00:09:25.505 inet 192.168.100.9/24 scope global mlx_0_1 00:09:25.505 valid_lft forever preferred_lft forever 00:09:25.505 05:11:41 -- nvmf/common.sh@410 -- # return 0 00:09:25.505 05:11:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:25.505 05:11:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:25.505 05:11:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:25.505 05:11:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:25.505 05:11:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.505 05:11:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:25.505 05:11:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:25.505 05:11:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.505 05:11:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:25.505 05:11:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@104 -- # continue 2 00:09:25.505 05:11:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.505 05:11:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.505 05:11:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 
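The allocate_nic_ips / get_ip_address steps above read each RDMA interface's IPv4 address straight out of iproute2's one-record-per-line output; the awk/cut pipeline is exactly what the xtrace shows at nvmf/common.sh@112. Wrapped as a standalone helper for clarity:

    # Print the first IPv4 address on an interface. With `ip -o`, each
    # address is one line and field 4 is "ADDR/PREFIXLEN"; cut drops the
    # prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9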
00:09:25.505 05:11:41 -- nvmf/common.sh@104 -- # continue 2 00:09:25.505 05:11:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:25.505 05:11:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.505 05:11:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:25.505 05:11:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.505 05:11:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.505 05:11:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:25.505 192.168.100.9' 00:09:25.505 05:11:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:25.505 192.168.100.9' 00:09:25.505 05:11:41 -- nvmf/common.sh@445 -- # head -n 1 00:09:25.505 05:11:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:25.505 05:11:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:25.505 192.168.100.9' 00:09:25.505 05:11:41 -- nvmf/common.sh@446 -- # tail -n +2 00:09:25.505 05:11:41 -- nvmf/common.sh@446 -- # head -n 1 00:09:25.505 05:11:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:25.505 05:11:41 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:25.505 05:11:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:25.505 05:11:41 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:25.505 05:11:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:25.505 05:11:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:25.505 05:11:41 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:25.505 05:11:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:25.505 05:11:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.505 05:11:41 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 05:11:41 -- nvmf/common.sh@469 -- # nvmfpid=1691402 00:09:25.505 05:11:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.505 05:11:41 -- nvmf/common.sh@470 -- # waitforlisten 1691402 00:09:25.505 05:11:41 -- common/autotest_common.sh@829 -- # '[' -z 1691402 ']' 00:09:25.505 05:11:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.505 05:11:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.505 05:11:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.505 05:11:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.505 05:11:41 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 [2024-11-19 05:11:41.978936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
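Just above, nvmftestinit reduces the newline-separated RDMA_IP_LIST to first and second target IPs with a head/tail pair, then appends --num-shared-buffers 1024 to the rdma transport options. The selection logic, condensed with the same variable names as the trace:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    # -> 192.168.100.8 and 192.168.100.9, matching this run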
00:09:25.505 [2024-11-19 05:11:41.978984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.505 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.505 [2024-11-19 05:11:42.049476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.765 [2024-11-19 05:11:42.087291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.765 [2024-11-19 05:11:42.087400] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.765 [2024-11-19 05:11:42.087410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.765 [2024-11-19 05:11:42.087418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.765 [2024-11-19 05:11:42.087460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.765 [2024-11-19 05:11:42.087565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.765 [2024-11-19 05:11:42.087606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.765 [2024-11-19 05:11:42.087608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.336 05:11:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.336 05:11:42 -- common/autotest_common.sh@862 -- # return 0 00:09:26.336 05:11:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:26.336 05:11:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.336 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.336 05:11:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.336 05:11:42 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:26.336 05:11:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.336 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.336 [2024-11-19 05:11:42.850968] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:26.336 [2024-11-19 05:11:42.871683] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x170d200/0x17116f0) succeed. 00:09:26.336 [2024-11-19 05:11:42.881036] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x170e7f0/0x1752d90) succeed. 
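With both mlx5 IB devices created, connect_disconnect.sh stands up the target through a fixed RPC sequence and then loops nvme connect/disconnect 100 times; each iteration's disconnect is what prints one of the long run of "NQN:... disconnected 1 controller(s)" lines below. The sequence, collected as plain RPC calls (rpc_cmd in the trace is effectively scripts/rpc.py against /var/tmp/spdk.sock, and the loop body is a sketch, as the real script also validates the connection between steps):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512     # MALLOC_BDEV_SIZE/BLOCK_SIZE -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    for _ in $(seq 1 100); do    # num_iterations=100
        nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
            -a 192.168.100.8 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # "... disconnected 1 controller(s)"
    done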
00:09:26.596 05:11:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.596 05:11:42 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:26.596 05:11:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.596 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.596 05:11:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.596 05:11:42 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:26.596 05:11:42 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.596 05:11:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.596 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.596 05:11:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.596 05:11:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.596 05:11:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.596 05:11:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:26.596 05:11:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.596 05:11:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.596 [2024-11-19 05:11:43.020702] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:26.596 05:11:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:26.596 05:11:43 -- target/connect_disconnect.sh@34 -- # set +x 00:09:29.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.161 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:11.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.159 05:16:58 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:42.159 05:16:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:42.159 05:16:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:42.159 05:16:58 -- nvmf/common.sh@116 -- # sync 00:14:42.159 05:16:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:42.159 05:16:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:42.159 05:16:58 -- nvmf/common.sh@119 -- # set +e 00:14:42.159 05:16:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:42.159 05:16:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:42.159 rmmod nvme_rdma 00:14:42.159 rmmod nvme_fabrics 00:14:42.159 05:16:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:42.159 05:16:58 -- nvmf/common.sh@123 -- # set -e 00:14:42.159 05:16:58 -- nvmf/common.sh@124 -- # return 0 00:14:42.159 05:16:58 -- nvmf/common.sh@477 -- # '[' -n 1691402 ']' 00:14:42.159 05:16:58 -- nvmf/common.sh@478 -- # killprocess 1691402 00:14:42.159 05:16:58 -- common/autotest_common.sh@936 -- # '[' -z 1691402 ']' 00:14:42.159 05:16:58 -- common/autotest_common.sh@940 -- # kill -0 1691402 00:14:42.159 05:16:58 -- common/autotest_common.sh@941 -- # uname 00:14:42.159 05:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.159 05:16:58 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691402 00:14:42.159 05:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:42.159 05:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:42.159 05:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691402' 00:14:42.159 killing process with pid 1691402 00:14:42.159 05:16:58 -- common/autotest_common.sh@955 -- # kill 1691402 00:14:42.159 05:16:58 -- common/autotest_common.sh@960 -- # wait 1691402 00:14:42.419 05:16:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:42.419 05:16:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:42.419 00:14:42.419 real 5m23.836s 00:14:42.419 user 21m4.510s 00:14:42.419 sys 0m17.807s 00:14:42.419 05:16:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:42.419 05:16:58 -- common/autotest_common.sh@10 -- # set +x 00:14:42.419 ************************************ 00:14:42.419 END TEST nvmf_connect_disconnect 00:14:42.419 ************************************ 00:14:42.419 05:16:58 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:42.419 05:16:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:42.419 05:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.419 05:16:58 -- common/autotest_common.sh@10 -- # set +x 00:14:42.419 ************************************ 00:14:42.419 START TEST nvmf_multitarget 00:14:42.419 ************************************ 00:14:42.419 05:16:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:42.679 * Looking for test storage... 00:14:42.679 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:42.679 05:16:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:42.679 05:16:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:42.679 05:16:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:42.679 05:16:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:42.679 05:16:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:42.679 05:16:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:42.679 05:16:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:42.679 05:16:59 -- scripts/common.sh@335 -- # IFS=.-: 00:14:42.679 05:16:59 -- scripts/common.sh@335 -- # read -ra ver1 00:14:42.679 05:16:59 -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.679 05:16:59 -- scripts/common.sh@336 -- # read -ra ver2 00:14:42.679 05:16:59 -- scripts/common.sh@337 -- # local 'op=<' 00:14:42.679 05:16:59 -- scripts/common.sh@339 -- # ver1_l=2 00:14:42.679 05:16:59 -- scripts/common.sh@340 -- # ver2_l=1 00:14:42.679 05:16:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:42.679 05:16:59 -- scripts/common.sh@343 -- # case "$op" in 00:14:42.679 05:16:59 -- scripts/common.sh@344 -- # : 1 00:14:42.679 05:16:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:42.680 05:16:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.680 05:16:59 -- scripts/common.sh@364 -- # decimal 1 00:14:42.680 05:16:59 -- scripts/common.sh@352 -- # local d=1 00:14:42.680 05:16:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.680 05:16:59 -- scripts/common.sh@354 -- # echo 1 00:14:42.680 05:16:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:42.680 05:16:59 -- scripts/common.sh@365 -- # decimal 2 00:14:42.680 05:16:59 -- scripts/common.sh@352 -- # local d=2 00:14:42.680 05:16:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.680 05:16:59 -- scripts/common.sh@354 -- # echo 2 00:14:42.680 05:16:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:42.680 05:16:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:42.680 05:16:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:42.680 05:16:59 -- scripts/common.sh@367 -- # return 0 00:14:42.680 05:16:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.680 05:16:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:42.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.680 --rc genhtml_branch_coverage=1 00:14:42.680 --rc genhtml_function_coverage=1 00:14:42.680 --rc genhtml_legend=1 00:14:42.680 --rc geninfo_all_blocks=1 00:14:42.680 --rc geninfo_unexecuted_blocks=1 00:14:42.680 00:14:42.680 ' 00:14:42.680 05:16:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:42.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.680 --rc genhtml_branch_coverage=1 00:14:42.680 --rc genhtml_function_coverage=1 00:14:42.680 --rc genhtml_legend=1 00:14:42.680 --rc geninfo_all_blocks=1 00:14:42.680 --rc geninfo_unexecuted_blocks=1 00:14:42.680 00:14:42.680 ' 00:14:42.680 05:16:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:42.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.680 --rc genhtml_branch_coverage=1 00:14:42.680 --rc genhtml_function_coverage=1 00:14:42.680 --rc genhtml_legend=1 00:14:42.680 --rc geninfo_all_blocks=1 00:14:42.680 --rc geninfo_unexecuted_blocks=1 00:14:42.680 00:14:42.680 ' 00:14:42.680 05:16:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:42.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.680 --rc genhtml_branch_coverage=1 00:14:42.680 --rc genhtml_function_coverage=1 00:14:42.680 --rc genhtml_legend=1 00:14:42.680 --rc geninfo_all_blocks=1 00:14:42.680 --rc geninfo_unexecuted_blocks=1 00:14:42.680 00:14:42.680 ' 00:14:42.680 05:16:59 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.680 05:16:59 -- nvmf/common.sh@7 -- # uname -s 00:14:42.680 05:16:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.680 05:16:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.680 05:16:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.680 05:16:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.680 05:16:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.680 05:16:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.680 05:16:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.680 05:16:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.680 05:16:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.680 05:16:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.680 05:16:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:42.680 05:16:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:42.680 05:16:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.680 05:16:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.680 05:16:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.680 05:16:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:42.680 05:16:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.680 05:16:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.680 05:16:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.680 05:16:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.680 05:16:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.680 05:16:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.680 05:16:59 -- paths/export.sh@5 -- # export PATH 00:14:42.680 05:16:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.680 05:16:59 -- nvmf/common.sh@46 -- # : 0 00:14:42.680 05:16:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.680 05:16:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.680 05:16:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.680 05:16:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.680 05:16:59 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.680 05:16:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.680 05:16:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.680 05:16:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.680 05:16:59 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:42.680 05:16:59 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:42.680 05:16:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:42.680 05:16:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.680 05:16:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:42.680 05:16:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:42.680 05:16:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:42.680 05:16:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.680 05:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.680 05:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.680 05:16:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:42.680 05:16:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:42.680 05:16:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:42.680 05:16:59 -- common/autotest_common.sh@10 -- # set +x 00:14:49.257 05:17:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:49.257 05:17:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:49.257 05:17:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:49.257 05:17:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:49.257 05:17:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:49.257 05:17:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:49.257 05:17:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:49.257 05:17:05 -- nvmf/common.sh@294 -- # net_devs=() 00:14:49.258 05:17:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:49.258 05:17:05 -- nvmf/common.sh@295 -- # e810=() 00:14:49.258 05:17:05 -- nvmf/common.sh@295 -- # local -ga e810 00:14:49.258 05:17:05 -- nvmf/common.sh@296 -- # x722=() 00:14:49.258 05:17:05 -- nvmf/common.sh@296 -- # local -ga x722 00:14:49.258 05:17:05 -- nvmf/common.sh@297 -- # mlx=() 00:14:49.258 05:17:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:49.258 05:17:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.258 05:17:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:49.258 05:17:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:14:49.258 05:17:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:49.258 05:17:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:49.258 05:17:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:49.258 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:49.258 05:17:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:49.258 05:17:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:49.258 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:49.258 05:17:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:49.258 05:17:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.258 05:17:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.258 05:17:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:49.258 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:49.258 05:17:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.258 05:17:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.258 05:17:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.258 05:17:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:49.258 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:49.258 05:17:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.258 05:17:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:49.258 05:17:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:49.258 05:17:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:49.258 05:17:05 -- nvmf/common.sh@57 -- # uname 00:14:49.258 05:17:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:49.258 05:17:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 
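rdma_device_init begins here by loading the kernel IB/RDMA stack one module at a time (ib_cm above; ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm and rdma_ucm follow below). A compact equivalent of that load step, assuming the same module set:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done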
00:14:49.258 05:17:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:49.258 05:17:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:49.258 05:17:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:49.258 05:17:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:49.258 05:17:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:49.258 05:17:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:49.258 05:17:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:49.258 05:17:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:49.258 05:17:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:49.258 05:17:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:49.258 05:17:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:49.258 05:17:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:49.258 05:17:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:49.258 05:17:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:49.258 05:17:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:49.258 05:17:05 -- nvmf/common.sh@104 -- # continue 2 00:14:49.258 05:17:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.258 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:49.258 05:17:05 -- nvmf/common.sh@104 -- # continue 2 00:14:49.258 05:17:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:49.258 05:17:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:49.258 05:17:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:49.258 05:17:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:49.258 05:17:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:49.258 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:49.258 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:49.258 altname enp217s0f0np0 00:14:49.258 altname ens818f0np0 00:14:49.258 inet 192.168.100.8/24 scope global mlx_0_0 00:14:49.258 valid_lft forever preferred_lft forever 00:14:49.258 05:17:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:49.258 05:17:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:49.258 05:17:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:49.258 05:17:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:49.258 05:17:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:49.258 05:17:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:49.258 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:49.258 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:49.258 altname enp217s0f1np1 00:14:49.258 altname ens818f1np1 00:14:49.258 inet 192.168.100.9/24 scope global mlx_0_1 00:14:49.258 valid_lft forever preferred_lft forever 00:14:49.258 05:17:05 -- nvmf/common.sh@410 -- # return 0 00:14:49.258 05:17:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.258 05:17:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:49.258 05:17:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:49.258 05:17:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:49.258 05:17:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:49.258 05:17:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:49.259 05:17:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:49.259 05:17:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:49.259 05:17:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:49.259 05:17:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:49.259 05:17:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:49.259 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.259 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:49.259 05:17:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:49.259 05:17:05 -- nvmf/common.sh@104 -- # continue 2 00:14:49.259 05:17:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:49.259 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.259 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:49.259 05:17:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:49.259 05:17:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:49.259 05:17:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:49.259 05:17:05 -- nvmf/common.sh@104 -- # continue 2 00:14:49.259 05:17:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:49.259 05:17:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:49.259 05:17:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:49.259 05:17:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:49.259 05:17:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:49.259 05:17:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:49.259 05:17:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:49.519 05:17:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:49.519 05:17:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:49.519 05:17:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:49.519 05:17:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:49.519 05:17:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:49.519 05:17:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:49.519 192.168.100.9' 00:14:49.519 05:17:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:49.519 192.168.100.9' 00:14:49.519 05:17:05 -- nvmf/common.sh@445 -- # head -n 1 00:14:49.519 05:17:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:49.519 05:17:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:49.519 192.168.100.9' 00:14:49.519 05:17:05 -- nvmf/common.sh@446 -- # tail -n +2 00:14:49.519 05:17:05 -- nvmf/common.sh@446 -- # head -n 1 00:14:49.519 05:17:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:49.519 05:17:05 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:49.519 05:17:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:49.519 05:17:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:49.519 05:17:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:49.519 05:17:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:49.519 05:17:05 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:49.519 05:17:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.519 05:17:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.519 05:17:05 -- common/autotest_common.sh@10 -- # set +x 00:14:49.519 05:17:05 -- nvmf/common.sh@469 -- # nvmfpid=1752095 00:14:49.519 05:17:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.519 05:17:05 -- nvmf/common.sh@470 -- # waitforlisten 1752095 00:14:49.519 05:17:05 -- common/autotest_common.sh@829 -- # '[' -z 1752095 ']' 00:14:49.519 05:17:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.520 05:17:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.520 05:17:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.520 05:17:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.520 05:17:05 -- common/autotest_common.sh@10 -- # set +x 00:14:49.520 [2024-11-19 05:17:05.931579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:49.520 [2024-11-19 05:17:05.931629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.520 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.520 [2024-11-19 05:17:06.001965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.520 [2024-11-19 05:17:06.038909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.520 [2024-11-19 05:17:06.039021] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.520 [2024-11-19 05:17:06.039030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.520 [2024-11-19 05:17:06.039038] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
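
The allocate_nic_ips trace above boils each RDMA interface down to its IPv4 address and then splits the resulting list into the first and second target IPs. A minimal sketch of that pattern, assuming the mlx_0_0/mlx_0_1 interface names this run reports (the helper is spelled out here for illustration, not copied from nvmf/common.sh):

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
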
00:14:49.520 [2024-11-19 05:17:06.039128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.520 [2024-11-19 05:17:06.039222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.520 [2024-11-19 05:17:06.039308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.520 [2024-11-19 05:17:06.039309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.458 05:17:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.458 05:17:06 -- common/autotest_common.sh@862 -- # return 0 00:14:50.458 05:17:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.459 05:17:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.459 05:17:06 -- common/autotest_common.sh@10 -- # set +x 00:14:50.459 05:17:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.459 05:17:06 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:50.459 05:17:06 -- target/multitarget.sh@21 -- # jq length 00:14:50.459 05:17:06 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:50.459 05:17:06 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:50.459 05:17:06 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:50.459 "nvmf_tgt_1" 00:14:50.459 05:17:07 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:50.718 "nvmf_tgt_2" 00:14:50.718 05:17:07 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:50.718 05:17:07 -- target/multitarget.sh@28 -- # jq length 00:14:50.718 05:17:07 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:50.718 05:17:07 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:50.978 true 00:14:50.978 05:17:07 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:50.978 true 00:14:50.978 05:17:07 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:50.978 05:17:07 -- target/multitarget.sh@35 -- # jq length 00:14:50.978 05:17:07 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:50.978 05:17:07 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:50.978 05:17:07 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:50.978 05:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.978 05:17:07 -- nvmf/common.sh@116 -- # sync 00:14:50.978 05:17:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:50.978 05:17:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:50.978 05:17:07 -- nvmf/common.sh@119 -- # set +e 00:14:51.238 05:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.238 05:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:51.238 rmmod nvme_rdma 00:14:51.238 rmmod nvme_fabrics 00:14:51.238 05:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.238 05:17:07 -- nvmf/common.sh@123 -- # set -e 00:14:51.238 05:17:07 -- nvmf/common.sh@124 -- # 
return 0 00:14:51.238 05:17:07 -- nvmf/common.sh@477 -- # '[' -n 1752095 ']' 00:14:51.238 05:17:07 -- nvmf/common.sh@478 -- # killprocess 1752095 00:14:51.238 05:17:07 -- common/autotest_common.sh@936 -- # '[' -z 1752095 ']' 00:14:51.238 05:17:07 -- common/autotest_common.sh@940 -- # kill -0 1752095 00:14:51.238 05:17:07 -- common/autotest_common.sh@941 -- # uname 00:14:51.238 05:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.238 05:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1752095 00:14:51.238 05:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.238 05:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.238 05:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1752095' 00:14:51.238 killing process with pid 1752095 00:14:51.238 05:17:07 -- common/autotest_common.sh@955 -- # kill 1752095 00:14:51.238 05:17:07 -- common/autotest_common.sh@960 -- # wait 1752095 00:14:51.497 05:17:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.497 05:17:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:51.497 00:14:51.497 real 0m8.865s 00:14:51.497 user 0m9.787s 00:14:51.497 sys 0m5.666s 00:14:51.497 05:17:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.497 05:17:07 -- common/autotest_common.sh@10 -- # set +x 00:14:51.497 ************************************ 00:14:51.497 END TEST nvmf_multitarget 00:14:51.497 ************************************ 00:14:51.497 05:17:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:51.497 05:17:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.497 05:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.497 05:17:07 -- common/autotest_common.sh@10 -- # set +x 00:14:51.497 ************************************ 00:14:51.497 START TEST nvmf_rpc 00:14:51.497 ************************************ 00:14:51.498 05:17:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:51.498 * Looking for test storage... 
00:14:51.498 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:51.498 05:17:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.498 05:17:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.498 05:17:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.498 05:17:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.498 05:17:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.498 05:17:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.498 05:17:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.498 05:17:08 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.498 05:17:08 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.498 05:17:08 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.498 05:17:08 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.498 05:17:08 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.498 05:17:08 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.498 05:17:08 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.498 05:17:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.498 05:17:08 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.498 05:17:08 -- scripts/common.sh@344 -- # : 1 00:14:51.498 05:17:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.498 05:17:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.498 05:17:08 -- scripts/common.sh@364 -- # decimal 1 00:14:51.498 05:17:08 -- scripts/common.sh@352 -- # local d=1 00:14:51.498 05:17:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.498 05:17:08 -- scripts/common.sh@354 -- # echo 1 00:14:51.498 05:17:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.498 05:17:08 -- scripts/common.sh@365 -- # decimal 2 00:14:51.498 05:17:08 -- scripts/common.sh@352 -- # local d=2 00:14:51.498 05:17:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.498 05:17:08 -- scripts/common.sh@354 -- # echo 2 00:14:51.498 05:17:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.498 05:17:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.498 05:17:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.498 05:17:08 -- scripts/common.sh@367 -- # return 0 00:14:51.498 05:17:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.498 05:17:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.498 --rc genhtml_branch_coverage=1 00:14:51.498 --rc genhtml_function_coverage=1 00:14:51.498 --rc genhtml_legend=1 00:14:51.498 --rc geninfo_all_blocks=1 00:14:51.498 --rc geninfo_unexecuted_blocks=1 00:14:51.498 00:14:51.498 ' 00:14:51.498 05:17:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.498 --rc genhtml_branch_coverage=1 00:14:51.498 --rc genhtml_function_coverage=1 00:14:51.498 --rc genhtml_legend=1 00:14:51.498 --rc geninfo_all_blocks=1 00:14:51.498 --rc geninfo_unexecuted_blocks=1 00:14:51.498 00:14:51.498 ' 00:14:51.498 05:17:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.498 --rc genhtml_branch_coverage=1 00:14:51.498 --rc genhtml_function_coverage=1 00:14:51.498 --rc genhtml_legend=1 00:14:51.498 --rc geninfo_all_blocks=1 00:14:51.498 --rc geninfo_unexecuted_blocks=1 00:14:51.498 00:14:51.498 ' 
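
The lcov check traced above (lt 1.15 2 via cmp_versions) splits each version string on dots and dashes and compares the fields numerically. A compact sketch of that comparison, simplified to the '<', '>' and '=' operators and offered as an illustration rather than the exact scripts/common.sh helper (the real one also sanitizes non-numeric fields first):

    cmp_versions() {  # usage: cmp_versions 1.15 '<' 2
        local op=$2 v max IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields count as 0, so "1.15" compares against "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]  # every field matched
    }

Here cmp_versions 1.15 '<' 2 succeeds, which is why the branch-coverage LCOV_OPTS seen above get exported for this run.
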
00:14:51.498 05:17:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.498 --rc genhtml_branch_coverage=1 00:14:51.498 --rc genhtml_function_coverage=1 00:14:51.498 --rc genhtml_legend=1 00:14:51.498 --rc geninfo_all_blocks=1 00:14:51.498 --rc geninfo_unexecuted_blocks=1 00:14:51.498 00:14:51.498 ' 00:14:51.498 05:17:08 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.498 05:17:08 -- nvmf/common.sh@7 -- # uname -s 00:14:51.498 05:17:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.498 05:17:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.498 05:17:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.498 05:17:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.498 05:17:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.498 05:17:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.498 05:17:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.498 05:17:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.498 05:17:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.498 05:17:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.757 05:17:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:51.757 05:17:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:51.757 05:17:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.757 05:17:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.757 05:17:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.757 05:17:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:51.757 05:17:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.757 05:17:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.757 05:17:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.757 05:17:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.757 05:17:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.758 05:17:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.758 05:17:08 -- paths/export.sh@5 -- # export PATH 00:14:51.758 05:17:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.758 05:17:08 -- nvmf/common.sh@46 -- # : 0 00:14:51.758 05:17:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.758 05:17:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.758 05:17:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.758 05:17:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.758 05:17:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.758 05:17:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.758 05:17:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.758 05:17:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.758 05:17:08 -- target/rpc.sh@11 -- # loops=5 00:14:51.758 05:17:08 -- target/rpc.sh@23 -- # nvmftestinit 00:14:51.758 05:17:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:51.758 05:17:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.758 05:17:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.758 05:17:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.758 05:17:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.758 05:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.758 05:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.758 05:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.758 05:17:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:51.758 05:17:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:51.758 05:17:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:51.758 05:17:08 -- common/autotest_common.sh@10 -- # set +x 00:14:58.333 05:17:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:58.333 05:17:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:58.333 05:17:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:58.333 05:17:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:58.333 05:17:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:58.333 05:17:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:58.333 05:17:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:58.333 05:17:14 -- nvmf/common.sh@294 -- # net_devs=() 00:14:58.333 05:17:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:58.333 05:17:14 -- nvmf/common.sh@295 -- # e810=() 00:14:58.333 05:17:14 -- nvmf/common.sh@295 -- # local -ga e810 00:14:58.333 
05:17:14 -- nvmf/common.sh@296 -- # x722=() 00:14:58.333 05:17:14 -- nvmf/common.sh@296 -- # local -ga x722 00:14:58.333 05:17:14 -- nvmf/common.sh@297 -- # mlx=() 00:14:58.333 05:17:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:58.333 05:17:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.333 05:17:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:58.333 05:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.333 05:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:58.333 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:58.333 05:17:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.333 05:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.333 05:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:58.333 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:58.333 05:17:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.333 05:17:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:58.333 05:17:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.333 05:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.333 05:17:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.333 05:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:58.333 05:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:58.333 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:58.333 05:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.333 05:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.333 05:17:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.333 05:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.333 05:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:58.333 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:58.333 05:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.333 05:17:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:58.333 05:17:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:58.333 05:17:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:58.333 05:17:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:58.333 05:17:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:58.333 05:17:14 -- nvmf/common.sh@57 -- # uname 00:14:58.333 05:17:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:58.333 05:17:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:58.334 05:17:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:58.334 05:17:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:58.334 05:17:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:58.334 05:17:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:58.334 05:17:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:58.334 05:17:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:58.334 05:17:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:58.334 05:17:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:58.334 05:17:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:58.334 05:17:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.334 05:17:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:58.334 05:17:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:58.334 05:17:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.334 05:17:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:58.334 05:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@104 -- # continue 2 00:14:58.334 05:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@104 -- # continue 2 00:14:58.334 05:17:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:58.334 05:17:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:58.334 05:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.334 05:17:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:58.334 05:17:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:58.334 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.334 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:58.334 altname enp217s0f0np0 00:14:58.334 altname ens818f0np0 00:14:58.334 inet 192.168.100.8/24 scope global mlx_0_0 00:14:58.334 valid_lft forever preferred_lft forever 00:14:58.334 05:17:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:58.334 05:17:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.334 05:17:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:58.334 05:17:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:58.334 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.334 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:58.334 altname enp217s0f1np1 00:14:58.334 altname ens818f1np1 00:14:58.334 inet 192.168.100.9/24 scope global mlx_0_1 00:14:58.334 valid_lft forever preferred_lft forever 00:14:58.334 05:17:14 -- nvmf/common.sh@410 -- # return 0 00:14:58.334 05:17:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.334 05:17:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:58.334 05:17:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:58.334 05:17:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:58.334 05:17:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.334 05:17:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:58.334 05:17:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:58.334 05:17:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.334 05:17:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:58.334 05:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@104 -- # continue 2 00:14:58.334 05:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.334 05:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.334 05:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@104 -- # continue 2 00:14:58.334 05:17:14 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:58.334 05:17:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.334 05:17:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:58.334 05:17:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.334 05:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.334 05:17:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:58.334 192.168.100.9' 00:14:58.334 05:17:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:58.334 192.168.100.9' 00:14:58.334 05:17:14 -- nvmf/common.sh@445 -- # head -n 1 00:14:58.334 05:17:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:58.334 05:17:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:58.334 192.168.100.9' 00:14:58.334 05:17:14 -- nvmf/common.sh@446 -- # tail -n +2 00:14:58.334 05:17:14 -- nvmf/common.sh@446 -- # head -n 1 00:14:58.334 05:17:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:58.334 05:17:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:58.334 05:17:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:58.334 05:17:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:58.334 05:17:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:58.334 05:17:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:58.334 05:17:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:58.334 05:17:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.334 05:17:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.334 05:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.334 05:17:14 -- nvmf/common.sh@469 -- # nvmfpid=1755649 00:14:58.334 05:17:14 -- nvmf/common.sh@470 -- # waitforlisten 1755649 00:14:58.334 05:17:14 -- common/autotest_common.sh@829 -- # '[' -z 1755649 ']' 00:14:58.334 05:17:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.334 05:17:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.334 05:17:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.334 05:17:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.334 05:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.334 05:17:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.334 [2024-11-19 05:17:14.564049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
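
nvmfappstart, traced just above, launches the target and blocks until its RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming the build path shown in this run and polling with rpc.py's rpc_get_methods (the real waitforlisten helper bounds its retries and handles more failure modes):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    # poll the RPC socket until the app is ready to serve requests
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
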
00:14:58.334 [2024-11-19 05:17:14.564096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.334 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.334 [2024-11-19 05:17:14.634334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.334 [2024-11-19 05:17:14.672521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.334 [2024-11-19 05:17:14.672637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.334 [2024-11-19 05:17:14.672646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.334 [2024-11-19 05:17:14.672655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.334 [2024-11-19 05:17:14.672701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.334 [2024-11-19 05:17:14.672818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.334 [2024-11-19 05:17:14.672888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.334 [2024-11-19 05:17:14.672889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.903 05:17:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.903 05:17:15 -- common/autotest_common.sh@862 -- # return 0 00:14:58.903 05:17:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.903 05:17:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.903 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:58.903 05:17:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.903 05:17:15 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:58.903 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.903 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:58.903 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.903 05:17:15 -- target/rpc.sh@26 -- # stats='{ 00:14:58.903 "tick_rate": 2500000000, 00:14:58.903 "poll_groups": [ 00:14:58.903 { 00:14:58.903 "name": "nvmf_tgt_poll_group_0", 00:14:58.903 "admin_qpairs": 0, 00:14:58.903 "io_qpairs": 0, 00:14:58.903 "current_admin_qpairs": 0, 00:14:58.903 "current_io_qpairs": 0, 00:14:58.903 "pending_bdev_io": 0, 00:14:58.903 "completed_nvme_io": 0, 00:14:58.903 "transports": [] 00:14:58.903 }, 00:14:58.903 { 00:14:58.903 "name": "nvmf_tgt_poll_group_1", 00:14:58.903 "admin_qpairs": 0, 00:14:58.903 "io_qpairs": 0, 00:14:58.903 "current_admin_qpairs": 0, 00:14:58.903 "current_io_qpairs": 0, 00:14:58.903 "pending_bdev_io": 0, 00:14:58.903 "completed_nvme_io": 0, 00:14:58.903 "transports": [] 00:14:58.903 }, 00:14:58.903 { 00:14:58.903 "name": "nvmf_tgt_poll_group_2", 00:14:58.903 "admin_qpairs": 0, 00:14:58.903 "io_qpairs": 0, 00:14:58.903 "current_admin_qpairs": 0, 00:14:58.903 "current_io_qpairs": 0, 00:14:58.903 "pending_bdev_io": 0, 00:14:58.903 "completed_nvme_io": 0, 00:14:58.903 "transports": [] 00:14:58.903 }, 00:14:58.903 { 00:14:58.903 "name": "nvmf_tgt_poll_group_3", 00:14:58.903 "admin_qpairs": 0, 00:14:58.903 "io_qpairs": 0, 00:14:58.903 "current_admin_qpairs": 0, 00:14:58.903 "current_io_qpairs": 0, 00:14:58.903 "pending_bdev_io": 0, 00:14:58.903 "completed_nvme_io": 0, 00:14:58.903 "transports": [] 
00:14:58.903 } 00:14:58.903 ] 00:14:58.903 }' 00:14:58.903 05:17:15 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:58.903 05:17:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:58.903 05:17:15 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:58.903 05:17:15 -- target/rpc.sh@15 -- # wc -l 00:14:59.162 05:17:15 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:59.162 05:17:15 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:59.162 05:17:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:59.162 05:17:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:59.162 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.162 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.162 [2024-11-19 05:17:15.556331] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24ab270/0x24af760) succeed. 00:14:59.162 [2024-11-19 05:17:15.565558] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24ac860/0x24f0e00) succeed. 00:14:59.162 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.162 05:17:15 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:59.162 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.162 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.422 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.422 05:17:15 -- target/rpc.sh@33 -- # stats='{ 00:14:59.422 "tick_rate": 2500000000, 00:14:59.422 "poll_groups": [ 00:14:59.422 { 00:14:59.422 "name": "nvmf_tgt_poll_group_0", 00:14:59.422 "admin_qpairs": 0, 00:14:59.422 "io_qpairs": 0, 00:14:59.422 "current_admin_qpairs": 0, 00:14:59.422 "current_io_qpairs": 0, 00:14:59.422 "pending_bdev_io": 0, 00:14:59.422 "completed_nvme_io": 0, 00:14:59.422 "transports": [ 00:14:59.422 { 00:14:59.422 "trtype": "RDMA", 00:14:59.422 "pending_data_buffer": 0, 00:14:59.422 "devices": [ 00:14:59.422 { 00:14:59.422 "name": "mlx5_0", 00:14:59.422 "polls": 16015, 00:14:59.422 "idle_polls": 16015, 00:14:59.422 "completions": 0, 00:14:59.422 "requests": 0, 00:14:59.422 "request_latency": 0, 00:14:59.422 "pending_free_request": 0, 00:14:59.422 "pending_rdma_read": 0, 00:14:59.422 "pending_rdma_write": 0, 00:14:59.422 "pending_rdma_send": 0, 00:14:59.422 "total_send_wrs": 0, 00:14:59.422 "send_doorbell_updates": 0, 00:14:59.422 "total_recv_wrs": 4096, 00:14:59.422 "recv_doorbell_updates": 1 00:14:59.422 }, 00:14:59.422 { 00:14:59.422 "name": "mlx5_1", 00:14:59.422 "polls": 16015, 00:14:59.422 "idle_polls": 16015, 00:14:59.422 "completions": 0, 00:14:59.422 "requests": 0, 00:14:59.422 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "nvmf_tgt_poll_group_1", 00:14:59.423 "admin_qpairs": 0, 00:14:59.423 "io_qpairs": 0, 00:14:59.423 "current_admin_qpairs": 0, 00:14:59.423 "current_io_qpairs": 0, 00:14:59.423 "pending_bdev_io": 0, 00:14:59.423 "completed_nvme_io": 0, 00:14:59.423 "transports": [ 00:14:59.423 { 00:14:59.423 "trtype": "RDMA", 00:14:59.423 "pending_data_buffer": 0, 00:14:59.423 "devices": [ 00:14:59.423 { 00:14:59.423 "name": "mlx5_0", 00:14:59.423 "polls": 10264, 
00:14:59.423 "idle_polls": 10264, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "mlx5_1", 00:14:59.423 "polls": 10264, 00:14:59.423 "idle_polls": 10264, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "nvmf_tgt_poll_group_2", 00:14:59.423 "admin_qpairs": 0, 00:14:59.423 "io_qpairs": 0, 00:14:59.423 "current_admin_qpairs": 0, 00:14:59.423 "current_io_qpairs": 0, 00:14:59.423 "pending_bdev_io": 0, 00:14:59.423 "completed_nvme_io": 0, 00:14:59.423 "transports": [ 00:14:59.423 { 00:14:59.423 "trtype": "RDMA", 00:14:59.423 "pending_data_buffer": 0, 00:14:59.423 "devices": [ 00:14:59.423 { 00:14:59.423 "name": "mlx5_0", 00:14:59.423 "polls": 5775, 00:14:59.423 "idle_polls": 5775, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "mlx5_1", 00:14:59.423 "polls": 5775, 00:14:59.423 "idle_polls": 5775, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "nvmf_tgt_poll_group_3", 00:14:59.423 "admin_qpairs": 0, 00:14:59.423 "io_qpairs": 0, 00:14:59.423 "current_admin_qpairs": 0, 00:14:59.423 "current_io_qpairs": 0, 00:14:59.423 "pending_bdev_io": 0, 00:14:59.423 "completed_nvme_io": 0, 00:14:59.423 "transports": [ 00:14:59.423 { 00:14:59.423 "trtype": "RDMA", 00:14:59.423 "pending_data_buffer": 0, 00:14:59.423 "devices": [ 00:14:59.423 { 00:14:59.423 "name": "mlx5_0", 00:14:59.423 "polls": 918, 00:14:59.423 "idle_polls": 918, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 }, 00:14:59.423 { 00:14:59.423 "name": "mlx5_1", 00:14:59.423 "polls": 918, 
00:14:59.423 "idle_polls": 918, 00:14:59.423 "completions": 0, 00:14:59.423 "requests": 0, 00:14:59.423 "request_latency": 0, 00:14:59.423 "pending_free_request": 0, 00:14:59.423 "pending_rdma_read": 0, 00:14:59.423 "pending_rdma_write": 0, 00:14:59.423 "pending_rdma_send": 0, 00:14:59.423 "total_send_wrs": 0, 00:14:59.423 "send_doorbell_updates": 0, 00:14:59.423 "total_recv_wrs": 4096, 00:14:59.423 "recv_doorbell_updates": 1 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 } 00:14:59.423 ] 00:14:59.423 }' 00:14:59.423 05:17:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.423 05:17:15 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:59.423 05:17:15 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.423 05:17:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.423 05:17:15 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:59.423 05:17:15 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:59.423 05:17:15 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:59.423 05:17:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:59.423 05:17:15 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:59.423 05:17:15 -- target/rpc.sh@15 -- # wc -l 00:14:59.423 05:17:15 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:59.423 05:17:15 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:59.423 05:17:15 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:59.423 05:17:15 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:59.423 05:17:15 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:59.423 05:17:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:59.423 05:17:15 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:59.423 05:17:15 -- target/rpc.sh@15 -- # wc -l 00:14:59.423 05:17:15 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:59.423 05:17:15 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:59.423 05:17:15 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:59.423 05:17:15 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:59.423 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.423 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.423 Malloc1 00:14:59.423 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.423 05:17:15 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:59.423 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.423 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.683 05:17:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.683 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.683 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 05:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.683 
05:17:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:59.683 05:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.683 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 05:17:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.683 05:17:16 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.683 05:17:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.683 05:17:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 [2024-11-19 05:17:16.008464] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.683 05:17:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.683 05:17:16 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:59.683 05:17:16 -- common/autotest_common.sh@650 -- # local es=0 00:14:59.683 05:17:16 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:59.683 05:17:16 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:59.683 05:17:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.683 05:17:16 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:59.683 05:17:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.683 05:17:16 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:59.683 05:17:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.683 05:17:16 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:59.683 05:17:16 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:59.683 05:17:16 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:59.683 [2024-11-19 05:17:16.054391] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:59.683 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:59.683 could not add new controller: failed to write to nvme-fabrics device 00:14:59.683 05:17:16 -- common/autotest_common.sh@653 -- # es=1 00:14:59.683 05:17:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.683 05:17:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.683 05:17:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.683 05:17:16 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:59.683 05:17:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.683 05:17:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 
05:17:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.683 05:17:16 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:00.621 05:17:17 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.621 05:17:17 -- common/autotest_common.sh@1187 -- # local i=0 00:15:00.621 05:17:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.621 05:17:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:00.621 05:17:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:02.527 05:17:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:02.527 05:17:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:02.527 05:17:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.786 05:17:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:02.786 05:17:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.786 05:17:19 -- common/autotest_common.sh@1197 -- # return 0 00:15:02.786 05:17:19 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.722 05:17:20 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.722 05:17:20 -- common/autotest_common.sh@1208 -- # local i=0 00:15:03.722 05:17:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:03.722 05:17:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.722 05:17:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:03.722 05:17:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.722 05:17:20 -- common/autotest_common.sh@1220 -- # return 0 00:15:03.722 05:17:20 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:03.722 05:17:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.722 05:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:03.722 05:17:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.722 05:17:20 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:03.722 05:17:20 -- common/autotest_common.sh@650 -- # local es=0 00:15:03.722 05:17:20 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:03.722 05:17:20 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:03.722 05:17:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.722 05:17:20 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:03.722 05:17:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.722 05:17:20 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:03.722 05:17:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.722 05:17:20 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:03.722 
05:17:20 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:03.722 05:17:20 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:03.722 [2024-11-19 05:17:20.156628] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:03.722 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:03.722 could not add new controller: failed to write to nvme-fabrics device 00:15:03.722 05:17:20 -- common/autotest_common.sh@653 -- # es=1 00:15:03.722 05:17:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:03.722 05:17:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:03.722 05:17:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:03.722 05:17:20 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:03.722 05:17:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.722 05:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:03.722 05:17:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.722 05:17:20 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:04.658 05:17:21 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.658 05:17:21 -- common/autotest_common.sh@1187 -- # local i=0 00:15:04.658 05:17:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.658 05:17:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:04.658 05:17:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:07.192 05:17:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:07.192 05:17:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:07.192 05:17:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.192 05:17:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:07.192 05:17:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.192 05:17:23 -- common/autotest_common.sh@1197 -- # return 0 00:15:07.192 05:17:23 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.760 05:17:24 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.760 05:17:24 -- common/autotest_common.sh@1208 -- # local i=0 00:15:07.760 05:17:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:07.760 05:17:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.760 05:17:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:07.760 05:17:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.760 05:17:24 -- common/autotest_common.sh@1220 -- # return 0 00:15:07.760 05:17:24 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.760 05:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 05:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
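
The sequence that just completed is the heart of this test: connect attempts are expected to fail while the host is not whitelisted (the NOT wrapper asserts the Input/output error), and to succeed once the host is added or any-host access is re-enabled. A condensed outline of that host-gating flow, using the same rpc_cmd verbs as the trace (rpc_cmd stands in for scripts/rpc.py; the full nvme connect arguments are exactly as traced above):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # lock the subsystem down
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect ...                                                      # fails: host not on the whitelist
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect ...                                                      # succeeds; waitforserial sees the namespace
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect ...                                                      # fails again
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
    nvme connect ...                                                      # succeeds for any host now
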
00:15:07.760 05:17:24 -- target/rpc.sh@81 -- # seq 1 5 00:15:07.760 05:17:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.760 05:17:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.760 05:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 05:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 05:17:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:07.760 05:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 [2024-11-19 05:17:24.228653] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:07.760 05:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 05:17:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.760 05:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 05:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 05:17:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.760 05:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 05:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 05:17:24 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:08.697 05:17:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.697 05:17:25 -- common/autotest_common.sh@1187 -- # local i=0 00:15:08.697 05:17:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.697 05:17:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:08.697 05:17:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:11.319 05:17:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:11.319 05:17:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:11.319 05:17:27 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.319 05:17:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:11.319 05:17:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.319 05:17:27 -- common/autotest_common.sh@1197 -- # return 0 00:15:11.319 05:17:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.886 05:17:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.886 05:17:28 -- common/autotest_common.sh@1208 -- # local i=0 00:15:11.886 05:17:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:11.886 05:17:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.886 05:17:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:11.886 05:17:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.886 05:17:28 -- common/autotest_common.sh@1220 -- # return 0 00:15:11.886 
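Each pass of the loop traced above (target/rpc.sh@81-@94) provisions a subsystem end to end and tears it down again: create it with serial SPDKISFASTANDAWESOME, add the RDMA listener, attach bdev Malloc1 as namespace 5, open it to any host, connect, wait for the serial to show up in lsblk, disconnect, then remove the namespace and delete the subsystem. A condensed sketch of one iteration; the waitforserial polling loop is reconstructed from the @1194-@1197 trace lines:

# One iteration of the provisioning loop above (sketch; variables as in this run).
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
HOSTID=8013ee90-59d8-e711-906e-00163566263e

$RPC nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5    # attach the malloc bdev as nsid 5
$RPC nvmf_subsystem_allow_any_host "$SUBNQN"
nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
     -t rdma -n "$SUBNQN" -a 192.168.100.8 -s 4420

# waitforserial: poll lsblk until exactly one block device reports the serial.
i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
done

nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_ns "$SUBNQN" 5
$RPC nvmf_delete_subsystem "$SUBNQN"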
05:17:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.886 05:17:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 [2024-11-19 05:17:28.269014] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.886 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.886 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.886 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.886 05:17:28 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:12.832 05:17:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.832 05:17:29 -- common/autotest_common.sh@1187 -- # local i=0 00:15:12.832 05:17:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.832 05:17:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:12.832 05:17:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:14.736 05:17:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:14.736 05:17:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:14.736 05:17:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.736 05:17:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:14.736 05:17:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.736 05:17:31 -- common/autotest_common.sh@1197 -- # return 0 00:15:14.736 05:17:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.674 05:17:32 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.674 05:17:32 -- common/autotest_common.sh@1208 -- # local i=0 00:15:15.674 05:17:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:15.674 05:17:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.674 05:17:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:15.674 05:17:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.934 05:17:32 -- common/autotest_common.sh@1220 -- # return 0 00:15:15.934 05:17:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:15.934 05:17:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 [2024-11-19 05:17:32.281263] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.934 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.934 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:15.934 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.934 05:17:32 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:16.871 05:17:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.871 05:17:33 -- common/autotest_common.sh@1187 -- # local i=0 00:15:16.871 05:17:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.871 05:17:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:16.871 05:17:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:18.774 05:17:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:18.774 05:17:35 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:18.774 05:17:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.774 05:17:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:18.774 05:17:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.774 05:17:35 -- common/autotest_common.sh@1197 -- # return 0 00:15:18.774 05:17:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.711 05:17:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.711 05:17:36 -- common/autotest_common.sh@1208 -- # local i=0 00:15:19.711 05:17:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:19.711 05:17:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.711 05:17:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:19.711 05:17:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.970 05:17:36 -- common/autotest_common.sh@1220 -- # return 0 00:15:19.970 05:17:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:19.970 05:17:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 [2024-11-19 05:17:36.312318] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.970 05:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.970 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.970 05:17:36 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:20.907 05:17:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.907 05:17:37 -- common/autotest_common.sh@1187 -- # local i=0 00:15:20.907 05:17:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.907 05:17:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:20.907 05:17:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:22.809 05:17:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:22.809 05:17:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:22.809 05:17:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.809 05:17:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:22.809 05:17:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.809 05:17:39 -- common/autotest_common.sh@1197 -- # return 0 00:15:22.809 05:17:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.744 05:17:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.744 05:17:40 -- common/autotest_common.sh@1208 -- # local i=0 00:15:23.745 05:17:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:23.745 05:17:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.745 05:17:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:24.003 05:17:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.003 05:17:40 -- common/autotest_common.sh@1220 -- # return 0 00:15:24.003 05:17:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.003 05:17:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 [2024-11-19 05:17:40.353576] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.003 05:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 05:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 05:17:40 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:25.042 05:17:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.042 05:17:41 -- common/autotest_common.sh@1187 -- # local i=0 00:15:25.042 05:17:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.042 05:17:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:25.042 05:17:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:26.994 05:17:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:26.994 05:17:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:26.994 05:17:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.994 05:17:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:26.994 05:17:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.994 05:17:43 -- common/autotest_common.sh@1197 -- # return 0 00:15:26.994 05:17:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.933 05:17:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.933 05:17:44 -- common/autotest_common.sh@1208 -- # local i=0 00:15:27.933 05:17:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:27.933 05:17:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.933 05:17:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:27.933 05:17:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.933 05:17:44 -- common/autotest_common.sh@1220 -- # return 0 00:15:27.933 05:17:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:27.933 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@99 -- # seq 1 5 00:15:27.934 05:17:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:27.934 05:17:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 [2024-11-19 05:17:44.372036] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:27.934 05:17:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 [2024-11-19 05:17:44.420239] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 
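The second loop (target/rpc.sh@99-@107) is target-side only: no host ever connects. It repeatedly creates the subsystem, adds the listener and Malloc1 with no explicit nsid, enables allow_any_host, then removes namespace 1 and deletes the subsystem, which is what the five repeated @100-@107 blocks in this trace are. Roughly, as a sketch:

# RPC-only churn: create/populate/strip/delete five times with no host I/O (sketch).
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
    $RPC nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc1     # no -n, so the first free nsid (1)
    $RPC nvmf_subsystem_allow_any_host "$SUBNQN"
    $RPC nvmf_subsystem_remove_ns "$SUBNQN" 1
    $RPC nvmf_delete_subsystem "$SUBNQN"
done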
05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:27.934 05:17:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 [2024-11-19 05:17:44.468390] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.934 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.934 05:17:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.934 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.934 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.194 05:17:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 [2024-11-19 05:17:44.516563] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.194 05:17:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.194 05:17:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:28.194 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.194 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.194 [2024-11-19 05:17:44.564711] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.195 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.195 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.195 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.195 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.195 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.195 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.195 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.195 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:28.195 
05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.195 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.195 05:17:44 -- target/rpc.sh@110 -- # stats='{ 00:15:28.195 "tick_rate": 2500000000, 00:15:28.195 "poll_groups": [ 00:15:28.195 { 00:15:28.195 "name": "nvmf_tgt_poll_group_0", 00:15:28.195 "admin_qpairs": 2, 00:15:28.195 "io_qpairs": 27, 00:15:28.195 "current_admin_qpairs": 0, 00:15:28.195 "current_io_qpairs": 0, 00:15:28.195 "pending_bdev_io": 0, 00:15:28.195 "completed_nvme_io": 78, 00:15:28.195 "transports": [ 00:15:28.195 { 00:15:28.195 "trtype": "RDMA", 00:15:28.195 "pending_data_buffer": 0, 00:15:28.195 "devices": [ 00:15:28.195 { 00:15:28.195 "name": "mlx5_0", 00:15:28.195 "polls": 3463004, 00:15:28.195 "idle_polls": 3462763, 00:15:28.195 "completions": 263, 00:15:28.195 "requests": 131, 00:15:28.195 "request_latency": 21331564, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 207, 00:15:28.195 "send_doorbell_updates": 120, 00:15:28.195 "total_recv_wrs": 4227, 00:15:28.195 "recv_doorbell_updates": 120 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "mlx5_1", 00:15:28.195 "polls": 3463004, 00:15:28.195 "idle_polls": 3463004, 00:15:28.195 "completions": 0, 00:15:28.195 "requests": 0, 00:15:28.195 "request_latency": 0, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 0, 00:15:28.195 "send_doorbell_updates": 0, 00:15:28.195 "total_recv_wrs": 4096, 00:15:28.195 "recv_doorbell_updates": 1 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "nvmf_tgt_poll_group_1", 00:15:28.195 "admin_qpairs": 2, 00:15:28.195 "io_qpairs": 26, 00:15:28.195 "current_admin_qpairs": 0, 00:15:28.195 "current_io_qpairs": 0, 00:15:28.195 "pending_bdev_io": 0, 00:15:28.195 "completed_nvme_io": 125, 00:15:28.195 "transports": [ 00:15:28.195 { 00:15:28.195 "trtype": "RDMA", 00:15:28.195 "pending_data_buffer": 0, 00:15:28.195 "devices": [ 00:15:28.195 { 00:15:28.195 "name": "mlx5_0", 00:15:28.195 "polls": 3414915, 00:15:28.195 "idle_polls": 3414598, 00:15:28.195 "completions": 356, 00:15:28.195 "requests": 178, 00:15:28.195 "request_latency": 34233926, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 302, 00:15:28.195 "send_doorbell_updates": 154, 00:15:28.195 "total_recv_wrs": 4274, 00:15:28.195 "recv_doorbell_updates": 155 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "mlx5_1", 00:15:28.195 "polls": 3414915, 00:15:28.195 "idle_polls": 3414915, 00:15:28.195 "completions": 0, 00:15:28.195 "requests": 0, 00:15:28.195 "request_latency": 0, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 0, 00:15:28.195 "send_doorbell_updates": 0, 00:15:28.195 "total_recv_wrs": 4096, 00:15:28.195 "recv_doorbell_updates": 1 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "nvmf_tgt_poll_group_2", 00:15:28.195 "admin_qpairs": 1, 00:15:28.195 "io_qpairs": 26, 00:15:28.195 
"current_admin_qpairs": 0, 00:15:28.195 "current_io_qpairs": 0, 00:15:28.195 "pending_bdev_io": 0, 00:15:28.195 "completed_nvme_io": 126, 00:15:28.195 "transports": [ 00:15:28.195 { 00:15:28.195 "trtype": "RDMA", 00:15:28.195 "pending_data_buffer": 0, 00:15:28.195 "devices": [ 00:15:28.195 { 00:15:28.195 "name": "mlx5_0", 00:15:28.195 "polls": 3492868, 00:15:28.195 "idle_polls": 3492600, 00:15:28.195 "completions": 307, 00:15:28.195 "requests": 153, 00:15:28.195 "request_latency": 32522910, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 266, 00:15:28.195 "send_doorbell_updates": 131, 00:15:28.195 "total_recv_wrs": 4249, 00:15:28.195 "recv_doorbell_updates": 131 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "mlx5_1", 00:15:28.195 "polls": 3492868, 00:15:28.195 "idle_polls": 3492868, 00:15:28.195 "completions": 0, 00:15:28.195 "requests": 0, 00:15:28.195 "request_latency": 0, 00:15:28.195 "pending_free_request": 0, 00:15:28.195 "pending_rdma_read": 0, 00:15:28.195 "pending_rdma_write": 0, 00:15:28.195 "pending_rdma_send": 0, 00:15:28.195 "total_send_wrs": 0, 00:15:28.195 "send_doorbell_updates": 0, 00:15:28.195 "total_recv_wrs": 4096, 00:15:28.195 "recv_doorbell_updates": 1 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 } 00:15:28.195 ] 00:15:28.195 }, 00:15:28.195 { 00:15:28.195 "name": "nvmf_tgt_poll_group_3", 00:15:28.195 "admin_qpairs": 2, 00:15:28.195 "io_qpairs": 26, 00:15:28.195 "current_admin_qpairs": 0, 00:15:28.195 "current_io_qpairs": 0, 00:15:28.195 "pending_bdev_io": 0, 00:15:28.195 "completed_nvme_io": 126, 00:15:28.195 "transports": [ 00:15:28.195 { 00:15:28.195 "trtype": "RDMA", 00:15:28.195 "pending_data_buffer": 0, 00:15:28.195 "devices": [ 00:15:28.195 { 00:15:28.195 "name": "mlx5_0", 00:15:28.195 "polls": 2715300, 00:15:28.195 "idle_polls": 2714981, 00:15:28.195 "completions": 360, 00:15:28.196 "requests": 180, 00:15:28.196 "request_latency": 36230552, 00:15:28.196 "pending_free_request": 0, 00:15:28.196 "pending_rdma_read": 0, 00:15:28.196 "pending_rdma_write": 0, 00:15:28.196 "pending_rdma_send": 0, 00:15:28.196 "total_send_wrs": 306, 00:15:28.196 "send_doorbell_updates": 157, 00:15:28.196 "total_recv_wrs": 4276, 00:15:28.196 "recv_doorbell_updates": 158 00:15:28.196 }, 00:15:28.196 { 00:15:28.196 "name": "mlx5_1", 00:15:28.196 "polls": 2715300, 00:15:28.196 "idle_polls": 2715300, 00:15:28.196 "completions": 0, 00:15:28.196 "requests": 0, 00:15:28.196 "request_latency": 0, 00:15:28.196 "pending_free_request": 0, 00:15:28.196 "pending_rdma_read": 0, 00:15:28.196 "pending_rdma_write": 0, 00:15:28.196 "pending_rdma_send": 0, 00:15:28.196 "total_send_wrs": 0, 00:15:28.196 "send_doorbell_updates": 0, 00:15:28.196 "total_recv_wrs": 4096, 00:15:28.196 "recv_doorbell_updates": 1 00:15:28.196 } 00:15:28.196 ] 00:15:28.196 } 00:15:28.196 ] 00:15:28.196 } 00:15:28.196 ] 00:15:28.196 }' 00:15:28.196 05:17:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:28.196 05:17:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:28.196 05:17:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:28.196 05:17:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.196 05:17:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:28.196 05:17:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:28.196 05:17:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:28.196 
05:17:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:28.196 05:17:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.196 05:17:44 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:28.196 05:17:44 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:28.196 05:17:44 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:28.196 05:17:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:28.196 05:17:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:28.196 05:17:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.456 05:17:44 -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:15:28.456 05:17:44 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:28.456 05:17:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:28.456 05:17:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:28.456 05:17:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.456 05:17:44 -- target/rpc.sh@118 -- # (( 124318952 > 0 )) 00:15:28.456 05:17:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:28.456 05:17:44 -- target/rpc.sh@123 -- # nvmftestfini 00:15:28.456 05:17:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:28.456 05:17:44 -- nvmf/common.sh@116 -- # sync 00:15:28.456 05:17:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:28.456 05:17:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:28.456 05:17:44 -- nvmf/common.sh@119 -- # set +e 00:15:28.456 05:17:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:28.456 05:17:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:28.456 rmmod nvme_rdma 00:15:28.456 rmmod nvme_fabrics 00:15:28.456 05:17:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:28.456 05:17:44 -- nvmf/common.sh@123 -- # set -e 00:15:28.456 05:17:44 -- nvmf/common.sh@124 -- # return 0 00:15:28.456 05:17:44 -- nvmf/common.sh@477 -- # '[' -n 1755649 ']' 00:15:28.456 05:17:44 -- nvmf/common.sh@478 -- # killprocess 1755649 00:15:28.456 05:17:44 -- common/autotest_common.sh@936 -- # '[' -z 1755649 ']' 00:15:28.456 05:17:44 -- common/autotest_common.sh@940 -- # kill -0 1755649 00:15:28.456 05:17:44 -- common/autotest_common.sh@941 -- # uname 00:15:28.456 05:17:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.456 05:17:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755649 00:15:28.456 05:17:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.456 05:17:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.456 05:17:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1755649' 00:15:28.456 killing process with pid 1755649 00:15:28.456 05:17:44 -- common/autotest_common.sh@955 -- # kill 1755649 00:15:28.456 05:17:44 -- common/autotest_common.sh@960 -- # wait 1755649 00:15:28.715 05:17:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:28.715 05:17:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:28.715 00:15:28.715 real 0m37.360s 00:15:28.715 user 2m3.868s 00:15:28.715 sys 0m6.763s 00:15:28.715 05:17:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:28.715 05:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.715 ************************************ 00:15:28.715 END TEST nvmf_rpc 00:15:28.715 ************************************ 00:15:28.715 05:17:45 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:28.715 05:17:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:28.715 05:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.715 05:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.715 ************************************ 00:15:28.715 START TEST nvmf_invalid 00:15:28.715 ************************************ 00:15:28.715 05:17:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:28.975 * Looking for test storage... 00:15:28.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.975 05:17:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:28.975 05:17:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:28.975 05:17:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:28.975 05:17:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:28.975 05:17:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:28.975 05:17:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:28.975 05:17:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:28.975 05:17:45 -- scripts/common.sh@335 -- # IFS=.-: 00:15:28.975 05:17:45 -- scripts/common.sh@335 -- # read -ra ver1 00:15:28.975 05:17:45 -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.975 05:17:45 -- scripts/common.sh@336 -- # read -ra ver2 00:15:28.975 05:17:45 -- scripts/common.sh@337 -- # local 'op=<' 00:15:28.975 05:17:45 -- scripts/common.sh@339 -- # ver1_l=2 00:15:28.975 05:17:45 -- scripts/common.sh@340 -- # ver2_l=1 00:15:28.975 05:17:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:28.975 05:17:45 -- scripts/common.sh@343 -- # case "$op" in 00:15:28.975 05:17:45 -- scripts/common.sh@344 -- # : 1 00:15:28.975 05:17:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:28.975 05:17:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.975 05:17:45 -- scripts/common.sh@364 -- # decimal 1 00:15:28.975 05:17:45 -- scripts/common.sh@352 -- # local d=1 00:15:28.975 05:17:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.975 05:17:45 -- scripts/common.sh@354 -- # echo 1 00:15:28.975 05:17:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:28.975 05:17:45 -- scripts/common.sh@365 -- # decimal 2 00:15:28.975 05:17:45 -- scripts/common.sh@352 -- # local d=2 00:15:28.975 05:17:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.975 05:17:45 -- scripts/common.sh@354 -- # echo 2 00:15:28.975 05:17:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:28.975 05:17:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:28.976 05:17:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:28.976 05:17:45 -- scripts/common.sh@367 -- # return 0 00:15:28.976 05:17:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.976 05:17:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.976 --rc genhtml_branch_coverage=1 00:15:28.976 --rc genhtml_function_coverage=1 00:15:28.976 --rc genhtml_legend=1 00:15:28.976 --rc geninfo_all_blocks=1 00:15:28.976 --rc geninfo_unexecuted_blocks=1 00:15:28.976 00:15:28.976 ' 00:15:28.976 05:17:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.976 --rc genhtml_branch_coverage=1 00:15:28.976 --rc genhtml_function_coverage=1 00:15:28.976 --rc genhtml_legend=1 00:15:28.976 --rc geninfo_all_blocks=1 00:15:28.976 --rc geninfo_unexecuted_blocks=1 00:15:28.976 00:15:28.976 ' 00:15:28.976 05:17:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.976 --rc genhtml_branch_coverage=1 00:15:28.976 --rc genhtml_function_coverage=1 00:15:28.976 --rc genhtml_legend=1 00:15:28.976 --rc geninfo_all_blocks=1 00:15:28.976 --rc geninfo_unexecuted_blocks=1 00:15:28.976 00:15:28.976 ' 00:15:28.976 05:17:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.976 --rc genhtml_branch_coverage=1 00:15:28.976 --rc genhtml_function_coverage=1 00:15:28.976 --rc genhtml_legend=1 00:15:28.976 --rc geninfo_all_blocks=1 00:15:28.976 --rc geninfo_unexecuted_blocks=1 00:15:28.976 00:15:28.976 ' 00:15:28.976 05:17:45 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.976 05:17:45 -- nvmf/common.sh@7 -- # uname -s 00:15:28.976 05:17:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.976 05:17:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.976 05:17:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.976 05:17:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.976 05:17:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.976 05:17:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.976 05:17:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.976 05:17:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.976 05:17:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.976 05:17:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.976 05:17:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:28.976 05:17:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:28.976 05:17:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.976 05:17:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.976 05:17:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.976 05:17:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:28.976 05:17:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.976 05:17:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.976 05:17:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.976 05:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.976 05:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.976 05:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.976 05:17:45 -- paths/export.sh@5 -- # export PATH 00:15:28.976 05:17:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.976 05:17:45 -- nvmf/common.sh@46 -- # : 0 00:15:28.976 05:17:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:28.976 05:17:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:28.976 05:17:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:28.976 05:17:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.976 05:17:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.976 05:17:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:28.976 05:17:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:28.976 05:17:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:28.976 05:17:45 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:28.976 05:17:45 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:28.976 05:17:45 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:28.976 05:17:45 -- target/invalid.sh@14 -- # target=foobar 00:15:28.976 05:17:45 -- target/invalid.sh@16 -- # RANDOM=0 00:15:28.976 05:17:45 -- target/invalid.sh@34 -- # nvmftestinit 00:15:28.976 05:17:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:28.976 05:17:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.976 05:17:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:28.976 05:17:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:28.976 05:17:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:28.976 05:17:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.976 05:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.976 05:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.976 05:17:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:28.976 05:17:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:28.976 05:17:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:28.976 05:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:35.570 05:17:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.570 05:17:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:35.570 05:17:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:35.570 05:17:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:35.570 05:17:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:35.570 05:17:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:35.570 05:17:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:35.570 05:17:52 -- nvmf/common.sh@294 -- # net_devs=() 00:15:35.570 05:17:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:35.570 05:17:52 -- nvmf/common.sh@295 -- # e810=() 00:15:35.570 05:17:52 -- nvmf/common.sh@295 -- # local -ga e810 00:15:35.570 05:17:52 -- nvmf/common.sh@296 -- # x722=() 00:15:35.570 05:17:52 -- nvmf/common.sh@296 -- # local -ga x722 00:15:35.570 05:17:52 -- nvmf/common.sh@297 -- # mlx=() 00:15:35.570 05:17:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:35.570 05:17:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.570 05:17:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:35.570 05:17:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:35.570 05:17:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:35.570 05:17:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:35.570 05:17:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:35.570 05:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.570 05:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:35.570 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:35.570 05:17:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.570 05:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.570 05:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:35.570 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:35.570 05:17:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.570 05:17:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:35.570 05:17:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:35.570 05:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.571 05:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.571 05:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.571 05:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.571 05:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:35.571 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:35.571 05:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.571 05:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.571 05:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.571 05:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.571 05:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.571 05:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:35.571 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:35.571 05:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.571 05:17:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:35.571 05:17:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:35.571 05:17:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:35.571 05:17:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:35.571 05:17:52 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:35.571 05:17:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:35.571 05:17:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:35.571 05:17:52 -- nvmf/common.sh@57 -- # uname 00:15:35.571 05:17:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:35.571 05:17:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:35.571 05:17:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:35.571 05:17:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:35.571 05:17:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:35.830 05:17:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:35.830 05:17:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:35.830 05:17:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:35.830 05:17:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:35.830 05:17:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:35.830 05:17:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:35.830 05:17:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.830 05:17:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:35.830 05:17:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:35.830 05:17:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.830 05:17:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:35.830 05:17:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:35.830 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.830 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.830 05:17:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:35.830 05:17:52 -- nvmf/common.sh@104 -- # continue 2 00:15:35.830 05:17:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:35.830 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.830 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.830 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.830 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.830 05:17:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:35.830 05:17:52 -- nvmf/common.sh@104 -- # continue 2 00:15:35.830 05:17:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:35.830 05:17:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:35.830 05:17:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:35.830 05:17:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:35.830 05:17:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:35.830 05:17:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:35.830 05:17:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:35.830 05:17:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:35.830 05:17:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:35.831 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.831 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:35.831 altname enp217s0f0np0 00:15:35.831 altname ens818f0np0 00:15:35.831 inet 192.168.100.8/24 scope global mlx_0_0 00:15:35.831 valid_lft forever preferred_lft forever 00:15:35.831 05:17:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:35.831 05:17:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:35.831 05:17:52 -- nvmf/common.sh@112 
-- # ip -o -4 addr show mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:35.831 05:17:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:35.831 05:17:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:35.831 05:17:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:35.831 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.831 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:35.831 altname enp217s0f1np1 00:15:35.831 altname ens818f1np1 00:15:35.831 inet 192.168.100.9/24 scope global mlx_0_1 00:15:35.831 valid_lft forever preferred_lft forever 00:15:35.831 05:17:52 -- nvmf/common.sh@410 -- # return 0 00:15:35.831 05:17:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.831 05:17:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:35.831 05:17:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:35.831 05:17:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:35.831 05:17:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:35.831 05:17:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.831 05:17:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:35.831 05:17:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:35.831 05:17:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.831 05:17:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:35.831 05:17:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:35.831 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.831 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.831 05:17:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:35.831 05:17:52 -- nvmf/common.sh@104 -- # continue 2 00:15:35.831 05:17:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:35.831 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.831 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.831 05:17:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.831 05:17:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.831 05:17:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@104 -- # continue 2 00:15:35.831 05:17:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:35.831 05:17:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:35.831 05:17:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:35.831 05:17:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:35.831 05:17:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:35.831 05:17:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:35.831 05:17:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:35.831 192.168.100.9' 00:15:35.831 05:17:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:35.831 192.168.100.9' 00:15:35.831 05:17:52 -- nvmf/common.sh@445 -- # head -n 1 00:15:35.831 05:17:52 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:35.831 05:17:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:35.831 192.168.100.9' 00:15:35.831 05:17:52 -- nvmf/common.sh@446 -- # tail -n +2 00:15:35.831 05:17:52 -- nvmf/common.sh@446 -- # head -n 1 00:15:35.831 05:17:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:35.831 05:17:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:35.831 05:17:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:35.831 05:17:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:35.831 05:17:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:35.831 05:17:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:35.831 05:17:52 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:35.831 05:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:35.831 05:17:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.831 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:15:35.831 05:17:52 -- nvmf/common.sh@469 -- # nvmfpid=1764480 00:15:35.831 05:17:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.831 05:17:52 -- nvmf/common.sh@470 -- # waitforlisten 1764480 00:15:35.831 05:17:52 -- common/autotest_common.sh@829 -- # '[' -z 1764480 ']' 00:15:35.831 05:17:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.831 05:17:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.831 05:17:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.831 05:17:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.831 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:15:35.831 [2024-11-19 05:17:52.383103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.831 [2024-11-19 05:17:52.383154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.091 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.091 [2024-11-19 05:17:52.454817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.091 [2024-11-19 05:17:52.492933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:36.091 [2024-11-19 05:17:52.493048] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.091 [2024-11-19 05:17:52.493059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.091 [2024-11-19 05:17:52.493067] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
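Editor's note: the address plumbing traced above reduces to two small steps: pull each RDMA interface's IPv4 with an ip/awk/cut pipeline, then split the newline-separated RDMA_IP_LIST into first and second target IPs with head/tail. A minimal standalone sketch reconstructed from the traced commands (the mlx_0_0/mlx_0_1 names and 192.168.100.x addresses are the ones from this run, not fixed values):

    get_ip_address() {
      local interface=$1
      # `ip -o -4` prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                               # "192.168.100.8<newline>192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With both IPs resolved the script appends '--num-shared-buffers 1024' to NVMF_TRANSPORT_OPTS and loads nvme-rdma, as the trace shows next.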
00:15:36.091 [2024-11-19 05:17:52.493113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.091 [2024-11-19 05:17:52.493221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.091 [2024-11-19 05:17:52.493283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.091 [2024-11-19 05:17:52.493284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.660 05:17:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.660 05:17:53 -- common/autotest_common.sh@862 -- # return 0 00:15:36.660 05:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:36.660 05:17:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.660 05:17:53 -- common/autotest_common.sh@10 -- # set +x 00:15:36.920 05:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.920 05:17:53 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:36.920 05:17:53 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21284 00:15:36.920 [2024-11-19 05:17:53.422503] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:36.920 05:17:53 -- target/invalid.sh@40 -- # out='request: 00:15:36.920 { 00:15:36.920 "nqn": "nqn.2016-06.io.spdk:cnode21284", 00:15:36.920 "tgt_name": "foobar", 00:15:36.920 "method": "nvmf_create_subsystem", 00:15:36.920 "req_id": 1 00:15:36.920 } 00:15:36.920 Got JSON-RPC error response 00:15:36.920 response: 00:15:36.920 { 00:15:36.920 "code": -32603, 00:15:36.920 "message": "Unable to find target foobar" 00:15:36.920 }' 00:15:36.920 05:17:53 -- target/invalid.sh@41 -- # [[ request: 00:15:36.920 { 00:15:36.920 "nqn": "nqn.2016-06.io.spdk:cnode21284", 00:15:36.920 "tgt_name": "foobar", 00:15:36.920 "method": "nvmf_create_subsystem", 00:15:36.920 "req_id": 1 00:15:36.920 } 00:15:36.920 Got JSON-RPC error response 00:15:36.920 response: 00:15:36.920 { 00:15:36.920 "code": -32603, 00:15:36.920 "message": "Unable to find target foobar" 00:15:36.920 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:36.920 05:17:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:36.920 05:17:53 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10756 00:15:37.180 [2024-11-19 05:17:53.619255] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10756: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:37.180 05:17:53 -- target/invalid.sh@45 -- # out='request: 00:15:37.180 { 00:15:37.180 "nqn": "nqn.2016-06.io.spdk:cnode10756", 00:15:37.180 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:37.180 "method": "nvmf_create_subsystem", 00:15:37.180 "req_id": 1 00:15:37.180 } 00:15:37.180 Got JSON-RPC error response 00:15:37.180 response: 00:15:37.180 { 00:15:37.180 "code": -32602, 00:15:37.180 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:37.180 }' 00:15:37.180 05:17:53 -- target/invalid.sh@46 -- # [[ request: 00:15:37.180 { 00:15:37.180 "nqn": "nqn.2016-06.io.spdk:cnode10756", 00:15:37.180 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:37.180 "method": "nvmf_create_subsystem", 00:15:37.180 "req_id": 1 00:15:37.180 } 00:15:37.180 Got JSON-RPC error response 00:15:37.180 response: 00:15:37.180 { 00:15:37.180 
"code": -32602, 00:15:37.180 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:37.180 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:37.180 05:17:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:37.180 05:17:53 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode887 00:15:37.440 [2024-11-19 05:17:53.815827] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode887: invalid model number 'SPDK_Controller' 00:15:37.440 05:17:53 -- target/invalid.sh@50 -- # out='request: 00:15:37.440 { 00:15:37.440 "nqn": "nqn.2016-06.io.spdk:cnode887", 00:15:37.440 "model_number": "SPDK_Controller\u001f", 00:15:37.440 "method": "nvmf_create_subsystem", 00:15:37.440 "req_id": 1 00:15:37.440 } 00:15:37.440 Got JSON-RPC error response 00:15:37.440 response: 00:15:37.440 { 00:15:37.440 "code": -32602, 00:15:37.440 "message": "Invalid MN SPDK_Controller\u001f" 00:15:37.440 }' 00:15:37.440 05:17:53 -- target/invalid.sh@51 -- # [[ request: 00:15:37.440 { 00:15:37.440 "nqn": "nqn.2016-06.io.spdk:cnode887", 00:15:37.440 "model_number": "SPDK_Controller\u001f", 00:15:37.440 "method": "nvmf_create_subsystem", 00:15:37.440 "req_id": 1 00:15:37.440 } 00:15:37.440 Got JSON-RPC error response 00:15:37.440 response: 00:15:37.440 { 00:15:37.440 "code": -32602, 00:15:37.440 "message": "Invalid MN SPDK_Controller\u001f" 00:15:37.440 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:37.440 05:17:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:37.440 05:17:53 -- target/invalid.sh@19 -- # local length=21 ll 00:15:37.440 05:17:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:37.440 05:17:53 -- target/invalid.sh@21 -- # local chars 00:15:37.440 05:17:53 -- target/invalid.sh@22 -- # local string 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 46 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=. 
00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 64 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=@ 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 111 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=o 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 101 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=e 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 56 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=8 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 44 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=, 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 126 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+='~' 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 72 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=H 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 126 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+='~' 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 40 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+='(' 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 50 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=2 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 103 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=g 
00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 42 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+='*' 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 108 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=l 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 76 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=L 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 97 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=a 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 118 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=v 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 113 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=q 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # printf %x 32 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:37.440 05:17:53 -- target/invalid.sh@25 -- # string+=' ' 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.440 05:17:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.700 05:17:53 -- target/invalid.sh@25 -- # printf %x 103 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # string+=g 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # printf %x 83 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # string+=S 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.700 05:17:54 -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:15:37.700 05:17:54 -- target/invalid.sh@31 -- # echo '.@oe8,~H~(2g*lLavq gS' 00:15:37.700 05:17:54 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.@oe8,~H~(2g*lLavq gS' nqn.2016-06.io.spdk:cnode24051 00:15:37.700 [2024-11-19 05:17:54.177051] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24051: invalid serial number '.@oe8,~H~(2g*lLavq gS' 00:15:37.700 05:17:54 -- target/invalid.sh@54 -- # out='request: 00:15:37.700 { 00:15:37.700 "nqn": "nqn.2016-06.io.spdk:cnode24051", 00:15:37.700 "serial_number": ".@oe8,~H~(2g*lLavq gS", 00:15:37.700 "method": "nvmf_create_subsystem", 00:15:37.700 "req_id": 1 00:15:37.700 } 00:15:37.700 Got JSON-RPC error response 00:15:37.700 response: 00:15:37.700 { 00:15:37.700 "code": -32602, 00:15:37.700 "message": "Invalid SN .@oe8,~H~(2g*lLavq gS" 00:15:37.700 }' 00:15:37.700 05:17:54 -- target/invalid.sh@55 -- # [[ request: 00:15:37.700 { 00:15:37.700 "nqn": "nqn.2016-06.io.spdk:cnode24051", 00:15:37.700 "serial_number": ".@oe8,~H~(2g*lLavq gS", 00:15:37.700 "method": "nvmf_create_subsystem", 00:15:37.700 "req_id": 1 00:15:37.700 } 00:15:37.700 Got JSON-RPC error response 00:15:37.700 response: 00:15:37.700 { 00:15:37.700 "code": -32602, 00:15:37.700 "message": "Invalid SN .@oe8,~H~(2g*lLavq gS" 00:15:37.700 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:37.700 05:17:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:37.700 05:17:54 -- target/invalid.sh@19 -- # local length=41 ll 00:15:37.700 05:17:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:37.700 05:17:54 -- target/invalid.sh@21 -- # local chars 00:15:37.700 05:17:54 -- target/invalid.sh@22 -- # local string 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # printf %x 74 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # string+=J 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.700 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # printf %x 36 00:15:37.700 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # string+='$' 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # printf %x 76 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # string+=L 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # printf %x 74 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # string+=J 00:15:37.701 05:17:54 
-- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # printf %x 117 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # string+=u 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # printf %x 37 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:37.701 05:17:54 -- target/invalid.sh@25 -- # string+=% 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.701 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 90 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=Z 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 73 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=I 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 39 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=\' 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 50 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=2 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 117 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=u 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 88 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=X 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 116 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=t 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 38 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+='&' 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 107 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=k 00:15:37.961 05:17:54 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 56 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=8 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 67 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=C 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 86 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=V 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 83 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=S 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 94 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+='^' 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 74 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=J 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 127 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 61 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+== 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 76 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=L 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 117 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=u 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 107 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=k 00:15:37.961 05:17:54 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 48 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=0 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 113 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=q 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 51 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=3 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 53 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=5 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 93 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=']' 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 72 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=H 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 122 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=z 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 97 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=a 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 48 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=0 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 117 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=u 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.961 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # printf %x 102 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:37.961 05:17:54 -- target/invalid.sh@25 -- # string+=f 00:15:37.962 05:17:54 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # printf %x 71 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # string+=G 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # printf %x 112 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # string+=p 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # printf %x 81 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # string+=Q 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.962 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # printf %x 119 00:15:37.962 05:17:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:38.221 05:17:54 -- target/invalid.sh@25 -- # string+=w 00:15:38.221 05:17:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.221 05:17:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.221 05:17:54 -- target/invalid.sh@28 -- # [[ J == \- ]] 00:15:38.221 05:17:54 -- target/invalid.sh@31 -- # echo 'J$LJu%ZI'\''2uXt&k8CVS^J=Luk0q35]Hza0ufGpQw' 00:15:38.221 05:17:54 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'J$LJu%ZI'\''2uXt&k8CVS^J=Luk0q35]Hza0ufGpQw' nqn.2016-06.io.spdk:cnode14364 00:15:38.221 [2024-11-19 05:17:54.686756] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14364: invalid model number 'J$LJu%ZI'2uXt&k8CVS^J=Luk0q35]Hza0ufGpQw' 00:15:38.221 05:17:54 -- target/invalid.sh@58 -- # out='request: 00:15:38.221 { 00:15:38.221 "nqn": "nqn.2016-06.io.spdk:cnode14364", 00:15:38.221 "model_number": "J$LJu%ZI'\''2uXt&k8CVS^J\u007f=Luk0q35]Hza0ufGpQw", 00:15:38.221 "method": "nvmf_create_subsystem", 00:15:38.221 "req_id": 1 00:15:38.221 } 00:15:38.221 Got JSON-RPC error response 00:15:38.221 response: 00:15:38.221 { 00:15:38.221 "code": -32602, 00:15:38.221 "message": "Invalid MN J$LJu%ZI'\''2uXt&k8CVS^J\u007f=Luk0q35]Hza0ufGpQw" 00:15:38.221 }' 00:15:38.221 05:17:54 -- target/invalid.sh@59 -- # [[ request: 00:15:38.221 { 00:15:38.221 "nqn": "nqn.2016-06.io.spdk:cnode14364", 00:15:38.221 "model_number": "J$LJu%ZI'2uXt&k8CVS^J\u007f=Luk0q35]Hza0ufGpQw", 00:15:38.221 "method": "nvmf_create_subsystem", 00:15:38.221 "req_id": 1 00:15:38.221 } 00:15:38.221 Got JSON-RPC error response 00:15:38.221 response: 00:15:38.221 { 00:15:38.221 "code": -32602, 00:15:38.221 "message": "Invalid MN J$LJu%ZI'2uXt&k8CVS^J\u007f=Luk0q35]Hza0ufGpQw" 00:15:38.221 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:38.221 05:17:54 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:38.481 [2024-11-19 05:17:54.897119] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1399b80/0x139e070) succeed. 00:15:38.481 [2024-11-19 05:17:54.906361] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x139b170/0x13df710) succeed. 
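Editor's note: the long runs of printf %x / echo -e / string+= above are gen_random_s at work. Condensed, the traced loop amounts to the sketch below (a reconstruction from the trace, not the script quoted verbatim): draw `length` codes from the ASCII range 32..127 and append the matching character, producing serial and model numbers like '.@oe8,~H~(2g*lLavq gS' that exercise the target's input validation.

    gen_random_s() {
      local length=$1 ll string=
      local chars=({32..127})   # decimal codes: printable ASCII plus DEL (0x7f)
      for ((ll = 0; ll < length; ll++)); do
        # printf %x turns the code into hex; echo -e '\xNN' emits that character
        string+="$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")"
      done
      echo "$string"
    }

Codes like 0x1f and 0x7f are deliberately in range; they are why the invalid-SN and invalid-MN JSON-RPC responses above carry \u001f and \u007f escapes.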
00:15:38.740 05:17:55 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:38.740 05:17:55 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:38.740 05:17:55 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:38.740 192.168.100.9' 00:15:38.740 05:17:55 -- target/invalid.sh@67 -- # head -n 1 00:15:38.740 05:17:55 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:38.740 05:17:55 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:38.999 [2024-11-19 05:17:55.420188] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:38.999 05:17:55 -- target/invalid.sh@69 -- # out='request: 00:15:38.999 { 00:15:38.999 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:38.999 "listen_address": { 00:15:38.999 "trtype": "rdma", 00:15:38.999 "traddr": "192.168.100.8", 00:15:38.999 "trsvcid": "4421" 00:15:38.999 }, 00:15:38.999 "method": "nvmf_subsystem_remove_listener", 00:15:38.999 "req_id": 1 00:15:38.999 } 00:15:38.999 Got JSON-RPC error response 00:15:38.999 response: 00:15:38.999 { 00:15:38.999 "code": -32602, 00:15:38.999 "message": "Invalid parameters" 00:15:38.999 }' 00:15:38.999 05:17:55 -- target/invalid.sh@70 -- # [[ request: 00:15:38.999 { 00:15:38.999 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:39.000 "listen_address": { 00:15:39.000 "trtype": "rdma", 00:15:39.000 "traddr": "192.168.100.8", 00:15:39.000 "trsvcid": "4421" 00:15:39.000 }, 00:15:39.000 "method": "nvmf_subsystem_remove_listener", 00:15:39.000 "req_id": 1 00:15:39.000 } 00:15:39.000 Got JSON-RPC error response 00:15:39.000 response: 00:15:39.000 { 00:15:39.000 "code": -32602, 00:15:39.000 "message": "Invalid parameters" 00:15:39.000 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:39.000 05:17:55 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5420 -i 0 00:15:39.259 [2024-11-19 05:17:55.620880] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5420: invalid cntlid range [0-65519] 00:15:39.259 05:17:55 -- target/invalid.sh@73 -- # out='request: 00:15:39.259 { 00:15:39.259 "nqn": "nqn.2016-06.io.spdk:cnode5420", 00:15:39.259 "min_cntlid": 0, 00:15:39.259 "method": "nvmf_create_subsystem", 00:15:39.259 "req_id": 1 00:15:39.259 } 00:15:39.259 Got JSON-RPC error response 00:15:39.259 response: 00:15:39.259 { 00:15:39.259 "code": -32602, 00:15:39.259 "message": "Invalid cntlid range [0-65519]" 00:15:39.259 }' 00:15:39.259 05:17:55 -- target/invalid.sh@74 -- # [[ request: 00:15:39.259 { 00:15:39.259 "nqn": "nqn.2016-06.io.spdk:cnode5420", 00:15:39.259 "min_cntlid": 0, 00:15:39.259 "method": "nvmf_create_subsystem", 00:15:39.259 "req_id": 1 00:15:39.259 } 00:15:39.259 Got JSON-RPC error response 00:15:39.259 response: 00:15:39.259 { 00:15:39.259 "code": -32602, 00:15:39.259 "message": "Invalid cntlid range [0-65519]" 00:15:39.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.259 05:17:55 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11433 -i 65520 00:15:39.259 [2024-11-19 05:17:55.817579] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11433: invalid cntlid range [65520-65519] 00:15:39.519 
05:17:55 -- target/invalid.sh@75 -- # out='request: 00:15:39.519 { 00:15:39.519 "nqn": "nqn.2016-06.io.spdk:cnode11433", 00:15:39.519 "min_cntlid": 65520, 00:15:39.519 "method": "nvmf_create_subsystem", 00:15:39.519 "req_id": 1 00:15:39.519 } 00:15:39.519 Got JSON-RPC error response 00:15:39.519 response: 00:15:39.519 { 00:15:39.519 "code": -32602, 00:15:39.519 "message": "Invalid cntlid range [65520-65519]" 00:15:39.519 }' 00:15:39.519 05:17:55 -- target/invalid.sh@76 -- # [[ request: 00:15:39.519 { 00:15:39.519 "nqn": "nqn.2016-06.io.spdk:cnode11433", 00:15:39.519 "min_cntlid": 65520, 00:15:39.519 "method": "nvmf_create_subsystem", 00:15:39.519 "req_id": 1 00:15:39.519 } 00:15:39.519 Got JSON-RPC error response 00:15:39.519 response: 00:15:39.519 { 00:15:39.519 "code": -32602, 00:15:39.519 "message": "Invalid cntlid range [65520-65519]" 00:15:39.519 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.519 05:17:55 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27458 -I 0 00:15:39.519 [2024-11-19 05:17:55.994200] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27458: invalid cntlid range [1-0] 00:15:39.519 05:17:56 -- target/invalid.sh@77 -- # out='request: 00:15:39.519 { 00:15:39.519 "nqn": "nqn.2016-06.io.spdk:cnode27458", 00:15:39.519 "max_cntlid": 0, 00:15:39.519 "method": "nvmf_create_subsystem", 00:15:39.519 "req_id": 1 00:15:39.519 } 00:15:39.519 Got JSON-RPC error response 00:15:39.519 response: 00:15:39.519 { 00:15:39.519 "code": -32602, 00:15:39.519 "message": "Invalid cntlid range [1-0]" 00:15:39.519 }' 00:15:39.519 05:17:56 -- target/invalid.sh@78 -- # [[ request: 00:15:39.519 { 00:15:39.519 "nqn": "nqn.2016-06.io.spdk:cnode27458", 00:15:39.519 "max_cntlid": 0, 00:15:39.519 "method": "nvmf_create_subsystem", 00:15:39.519 "req_id": 1 00:15:39.519 } 00:15:39.519 Got JSON-RPC error response 00:15:39.519 response: 00:15:39.519 { 00:15:39.519 "code": -32602, 00:15:39.519 "message": "Invalid cntlid range [1-0]" 00:15:39.519 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.519 05:17:56 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4616 -I 65520 00:15:39.779 [2024-11-19 05:17:56.186924] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4616: invalid cntlid range [1-65520] 00:15:39.779 05:17:56 -- target/invalid.sh@79 -- # out='request: 00:15:39.779 { 00:15:39.779 "nqn": "nqn.2016-06.io.spdk:cnode4616", 00:15:39.779 "max_cntlid": 65520, 00:15:39.779 "method": "nvmf_create_subsystem", 00:15:39.779 "req_id": 1 00:15:39.779 } 00:15:39.779 Got JSON-RPC error response 00:15:39.779 response: 00:15:39.779 { 00:15:39.779 "code": -32602, 00:15:39.779 "message": "Invalid cntlid range [1-65520]" 00:15:39.779 }' 00:15:39.779 05:17:56 -- target/invalid.sh@80 -- # [[ request: 00:15:39.779 { 00:15:39.779 "nqn": "nqn.2016-06.io.spdk:cnode4616", 00:15:39.779 "max_cntlid": 65520, 00:15:39.779 "method": "nvmf_create_subsystem", 00:15:39.779 "req_id": 1 00:15:39.779 } 00:15:39.779 Got JSON-RPC error response 00:15:39.779 response: 00:15:39.779 { 00:15:39.779 "code": -32602, 00:15:39.779 "message": "Invalid cntlid range [1-65520]" 00:15:39.779 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.779 05:17:56 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode5771 -i 6 -I 5 00:15:40.038 [2024-11-19 05:17:56.379610] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5771: invalid cntlid range [6-5] 00:15:40.038 05:17:56 -- target/invalid.sh@83 -- # out='request: 00:15:40.038 { 00:15:40.038 "nqn": "nqn.2016-06.io.spdk:cnode5771", 00:15:40.038 "min_cntlid": 6, 00:15:40.038 "max_cntlid": 5, 00:15:40.038 "method": "nvmf_create_subsystem", 00:15:40.038 "req_id": 1 00:15:40.038 } 00:15:40.038 Got JSON-RPC error response 00:15:40.038 response: 00:15:40.038 { 00:15:40.038 "code": -32602, 00:15:40.038 "message": "Invalid cntlid range [6-5]" 00:15:40.038 }' 00:15:40.038 05:17:56 -- target/invalid.sh@84 -- # [[ request: 00:15:40.038 { 00:15:40.038 "nqn": "nqn.2016-06.io.spdk:cnode5771", 00:15:40.038 "min_cntlid": 6, 00:15:40.038 "max_cntlid": 5, 00:15:40.038 "method": "nvmf_create_subsystem", 00:15:40.038 "req_id": 1 00:15:40.038 } 00:15:40.038 Got JSON-RPC error response 00:15:40.038 response: 00:15:40.038 { 00:15:40.038 "code": -32602, 00:15:40.038 "message": "Invalid cntlid range [6-5]" 00:15:40.038 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.038 05:17:56 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:40.038 05:17:56 -- target/invalid.sh@87 -- # out='request: 00:15:40.038 { 00:15:40.038 "name": "foobar", 00:15:40.038 "method": "nvmf_delete_target", 00:15:40.038 "req_id": 1 00:15:40.038 } 00:15:40.038 Got JSON-RPC error response 00:15:40.038 response: 00:15:40.038 { 00:15:40.038 "code": -32602, 00:15:40.038 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:40.038 }' 00:15:40.038 05:17:56 -- target/invalid.sh@88 -- # [[ request: 00:15:40.038 { 00:15:40.038 "name": "foobar", 00:15:40.038 "method": "nvmf_delete_target", 00:15:40.038 "req_id": 1 00:15:40.038 } 00:15:40.038 Got JSON-RPC error response 00:15:40.038 response: 00:15:40.038 { 00:15:40.038 "code": -32602, 00:15:40.038 "message": "The specified target doesn't exist, cannot delete it." 
00:15:40.038 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:40.038 05:17:56 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:40.038 05:17:56 -- target/invalid.sh@91 -- # nvmftestfini 00:15:40.038 05:17:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.038 05:17:56 -- nvmf/common.sh@116 -- # sync 00:15:40.038 05:17:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:40.038 05:17:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:40.038 05:17:56 -- nvmf/common.sh@119 -- # set +e 00:15:40.038 05:17:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.038 05:17:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:40.038 rmmod nvme_rdma 00:15:40.038 rmmod nvme_fabrics 00:15:40.038 05:17:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.038 05:17:56 -- nvmf/common.sh@123 -- # set -e 00:15:40.039 05:17:56 -- nvmf/common.sh@124 -- # return 0 00:15:40.039 05:17:56 -- nvmf/common.sh@477 -- # '[' -n 1764480 ']' 00:15:40.039 05:17:56 -- nvmf/common.sh@478 -- # killprocess 1764480 00:15:40.039 05:17:56 -- common/autotest_common.sh@936 -- # '[' -z 1764480 ']' 00:15:40.039 05:17:56 -- common/autotest_common.sh@940 -- # kill -0 1764480 00:15:40.039 05:17:56 -- common/autotest_common.sh@941 -- # uname 00:15:40.039 05:17:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.039 05:17:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1764480 00:15:40.298 05:17:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.298 05:17:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.298 05:17:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1764480' 00:15:40.298 killing process with pid 1764480 00:15:40.298 05:17:56 -- common/autotest_common.sh@955 -- # kill 1764480 00:15:40.298 05:17:56 -- common/autotest_common.sh@960 -- # wait 1764480 00:15:40.298 05:17:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.298 05:17:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:40.298 00:15:40.298 real 0m11.586s 00:15:40.298 user 0m21.613s 00:15:40.298 sys 0m6.379s 00:15:40.298 05:17:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:40.298 05:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:40.298 ************************************ 00:15:40.298 END TEST nvmf_invalid 00:15:40.298 ************************************ 00:15:40.557 05:17:56 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:40.557 05:17:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:40.557 05:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.557 05:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:40.557 ************************************ 00:15:40.557 START TEST nvmf_abort 00:15:40.557 ************************************ 00:15:40.557 05:17:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:40.557 * Looking for test storage... 
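Editor's note: the END TEST / START TEST banners bracketing nvmf_invalid and nvmf_abort come from the run_test wrapper in autotest_common.sh. A rough, hypothetical reconstruction of its shape (the real helper also manages xtrace state; the timing output above, real 0m11.586s etc., is from its `time` of the test command):

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                  # e.g. abort.sh --transport=rdma
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }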
00:15:40.557 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:40.557 05:17:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:40.557 05:17:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:40.557 05:17:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:40.557 05:17:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:40.557 05:17:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:40.557 05:17:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:40.557 05:17:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:40.557 05:17:57 -- scripts/common.sh@335 -- # IFS=.-: 00:15:40.557 05:17:57 -- scripts/common.sh@335 -- # read -ra ver1 00:15:40.557 05:17:57 -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.557 05:17:57 -- scripts/common.sh@336 -- # read -ra ver2 00:15:40.557 05:17:57 -- scripts/common.sh@337 -- # local 'op=<' 00:15:40.557 05:17:57 -- scripts/common.sh@339 -- # ver1_l=2 00:15:40.557 05:17:57 -- scripts/common.sh@340 -- # ver2_l=1 00:15:40.557 05:17:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:40.557 05:17:57 -- scripts/common.sh@343 -- # case "$op" in 00:15:40.557 05:17:57 -- scripts/common.sh@344 -- # : 1 00:15:40.557 05:17:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:40.557 05:17:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.557 05:17:57 -- scripts/common.sh@364 -- # decimal 1 00:15:40.557 05:17:57 -- scripts/common.sh@352 -- # local d=1 00:15:40.557 05:17:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.557 05:17:57 -- scripts/common.sh@354 -- # echo 1 00:15:40.557 05:17:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:40.557 05:17:57 -- scripts/common.sh@365 -- # decimal 2 00:15:40.557 05:17:57 -- scripts/common.sh@352 -- # local d=2 00:15:40.557 05:17:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.557 05:17:57 -- scripts/common.sh@354 -- # echo 2 00:15:40.557 05:17:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:40.557 05:17:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:40.558 05:17:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:40.558 05:17:57 -- scripts/common.sh@367 -- # return 0 00:15:40.558 05:17:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.558 05:17:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.558 --rc genhtml_branch_coverage=1 00:15:40.558 --rc genhtml_function_coverage=1 00:15:40.558 --rc genhtml_legend=1 00:15:40.558 --rc geninfo_all_blocks=1 00:15:40.558 --rc geninfo_unexecuted_blocks=1 00:15:40.558 00:15:40.558 ' 00:15:40.558 05:17:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.558 --rc genhtml_branch_coverage=1 00:15:40.558 --rc genhtml_function_coverage=1 00:15:40.558 --rc genhtml_legend=1 00:15:40.558 --rc geninfo_all_blocks=1 00:15:40.558 --rc geninfo_unexecuted_blocks=1 00:15:40.558 00:15:40.558 ' 00:15:40.558 05:17:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.558 --rc genhtml_branch_coverage=1 00:15:40.558 --rc genhtml_function_coverage=1 00:15:40.558 --rc genhtml_legend=1 00:15:40.558 --rc geninfo_all_blocks=1 00:15:40.558 --rc geninfo_unexecuted_blocks=1 00:15:40.558 00:15:40.558 ' 
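Editor's note: the IFS=.-: / decimal walk traced above is lcov version gating: `lt 1.15 2` dispatches to cmp_versions, which compares the versions field by field to decide which LCOV_OPTS to export. A simplified sketch of that comparison under the same split-on-.-: convention (the real cmp_versions in scripts/common.sh also handles the >, >=, and <= operators):

    version_lt() {
      # true when $1 sorts before $2, comparing dot/dash/colon-separated fields numerically
      local -a v1 v2
      local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$((${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}))
      for ((i = 0; i < n; i++)); do
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1   # missing fields compare as 0
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
      done
      return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # 1 < 2 in the first field, so this prints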
00:15:40.558 05:17:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.558 --rc genhtml_branch_coverage=1 00:15:40.558 --rc genhtml_function_coverage=1 00:15:40.558 --rc genhtml_legend=1 00:15:40.558 --rc geninfo_all_blocks=1 00:15:40.558 --rc geninfo_unexecuted_blocks=1 00:15:40.558 00:15:40.558 ' 00:15:40.558 05:17:57 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.558 05:17:57 -- nvmf/common.sh@7 -- # uname -s 00:15:40.558 05:17:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.558 05:17:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.558 05:17:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.558 05:17:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.558 05:17:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.558 05:17:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.558 05:17:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.558 05:17:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.558 05:17:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.558 05:17:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.558 05:17:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:40.558 05:17:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:40.558 05:17:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.558 05:17:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.558 05:17:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.558 05:17:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:40.558 05:17:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.558 05:17:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.558 05:17:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.558 05:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.558 05:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.558 05:17:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.558 05:17:57 -- paths/export.sh@5 -- # export PATH 00:15:40.558 05:17:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.558 05:17:57 -- nvmf/common.sh@46 -- # : 0 00:15:40.558 05:17:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.558 05:17:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.558 05:17:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.558 05:17:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.817 05:17:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.817 05:17:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:40.817 05:17:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.817 05:17:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.817 05:17:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.817 05:17:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:40.817 05:17:57 -- target/abort.sh@14 -- # nvmftestinit 00:15:40.817 05:17:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:40.817 05:17:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.817 05:17:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.817 05:17:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.817 05:17:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.817 05:17:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.817 05:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.818 05:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.818 05:17:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:40.818 05:17:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:40.818 05:17:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:40.818 05:17:57 -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 05:18:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.392 05:18:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:47.392 05:18:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:47.392 05:18:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:47.392 05:18:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:47.392 05:18:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:47.392 05:18:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:47.392 05:18:03 -- nvmf/common.sh@294 -- # net_devs=() 00:15:47.392 05:18:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:47.392 05:18:03 -- nvmf/common.sh@295 -- 
# e810=() 00:15:47.392 05:18:03 -- nvmf/common.sh@295 -- # local -ga e810 00:15:47.392 05:18:03 -- nvmf/common.sh@296 -- # x722=() 00:15:47.392 05:18:03 -- nvmf/common.sh@296 -- # local -ga x722 00:15:47.392 05:18:03 -- nvmf/common.sh@297 -- # mlx=() 00:15:47.392 05:18:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:47.392 05:18:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.392 05:18:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:47.392 05:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:47.392 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:47.392 05:18:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.392 05:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:47.392 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:47.392 05:18:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.392 05:18:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:47.392 05:18:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.392 05:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
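The 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' lines come from matching PCI vendor/device IDs against the e810/x722/mlx allow-lists built just above. A rough standalone equivalent over sysfs (the script itself consults a prebuilt pci_bus_cache; this direct walk is an assumption for illustration):

    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")    # e.g. 0x15b3
        device=$(<"$dev/device")    # e.g. 0x1015 (a ConnectX-4 Lx on this rig)
        [[ $vendor == "$mellanox" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done

Device 0x1015 fails both the 0x1017 and 0x1019 special-cases in the trace, and since the transport is rdma the connect command is widened to 'nvme connect -i 15'.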
00:15:47.392 05:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.392 05:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:47.392 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:47.392 05:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.392 05:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:47.392 05:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.392 05:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:47.392 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:47.392 05:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.392 05:18:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:47.392 05:18:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:47.392 05:18:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:47.392 05:18:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:47.392 05:18:03 -- nvmf/common.sh@57 -- # uname 00:15:47.392 05:18:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:47.392 05:18:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:47.392 05:18:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:47.392 05:18:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:47.392 05:18:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:47.392 05:18:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:47.392 05:18:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:47.392 05:18:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:47.392 05:18:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:47.392 05:18:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:47.392 05:18:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:47.392 05:18:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.392 05:18:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:47.392 05:18:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:47.392 05:18:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.392 05:18:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:47.392 05:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:47.392 05:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:47.392 05:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.392 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.392 05:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:47.392 05:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:47.392 05:18:03 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:47.392 05:18:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:47.392 05:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:47.392 05:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:47.392 05:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:47.393 05:18:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:47.393 05:18:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:47.393 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:47.393 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:47.393 altname enp217s0f0np0 00:15:47.393 altname ens818f0np0 00:15:47.393 inet 192.168.100.8/24 scope global mlx_0_0 00:15:47.393 valid_lft forever preferred_lft forever 00:15:47.393 05:18:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:47.393 05:18:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:47.393 05:18:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:47.393 05:18:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:47.393 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:47.393 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:47.393 altname enp217s0f1np1 00:15:47.393 altname ens818f1np1 00:15:47.393 inet 192.168.100.9/24 scope global mlx_0_1 00:15:47.393 valid_lft forever preferred_lft forever 00:15:47.393 05:18:03 -- nvmf/common.sh@410 -- # return 0 00:15:47.393 05:18:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:47.393 05:18:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:47.393 05:18:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:47.393 05:18:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:47.393 05:18:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.393 05:18:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:47.393 05:18:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:47.393 05:18:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.393 05:18:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:47.393 05:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:47.393 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.393 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:47.393 05:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:47.393 05:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:47.393 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.393 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.393 05:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.393 05:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:47.393 05:18:03 -- 
nvmf/common.sh@104 -- # continue 2 00:15:47.393 05:18:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:47.393 05:18:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:47.393 05:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:47.393 05:18:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:47.393 05:18:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:47.393 05:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:47.393 05:18:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:47.393 192.168.100.9' 00:15:47.393 05:18:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:47.393 192.168.100.9' 00:15:47.393 05:18:03 -- nvmf/common.sh@445 -- # head -n 1 00:15:47.393 05:18:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:47.393 05:18:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:47.393 192.168.100.9' 00:15:47.393 05:18:03 -- nvmf/common.sh@446 -- # tail -n +2 00:15:47.393 05:18:03 -- nvmf/common.sh@446 -- # head -n 1 00:15:47.393 05:18:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:47.393 05:18:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:47.393 05:18:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:47.393 05:18:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:47.393 05:18:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:47.393 05:18:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:47.393 05:18:03 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:47.393 05:18:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:47.393 05:18:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.393 05:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 05:18:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:47.393 05:18:03 -- nvmf/common.sh@469 -- # nvmfpid=1768890 00:15:47.393 05:18:03 -- nvmf/common.sh@470 -- # waitforlisten 1768890 00:15:47.393 05:18:03 -- common/autotest_common.sh@829 -- # '[' -z 1768890 ']' 00:15:47.393 05:18:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.393 05:18:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.393 05:18:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.393 05:18:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.393 05:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 [2024-11-19 05:18:03.810203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
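Everything from here on keys off the two interface addresses just derived. Condensed from the get_ip_address / head / tail calls in the trace (same pipeline, wrapped for reuse):

    get_ip_address() {
        # Column 4 of 'ip -o -4 addr show DEV' is ADDR/PREFIX; drop the prefix.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"  # 192.168.100.8 192.168.100.9 on this rig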
00:15:47.393 [2024-11-19 05:18:03.810257] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.393 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.393 [2024-11-19 05:18:03.881360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.393 [2024-11-19 05:18:03.918149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.393 [2024-11-19 05:18:03.918276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.393 [2024-11-19 05:18:03.918286] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.393 [2024-11-19 05:18:03.918296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.393 [2024-11-19 05:18:03.918415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.393 [2024-11-19 05:18:03.918499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.393 [2024-11-19 05:18:03.918500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.331 05:18:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.331 05:18:04 -- common/autotest_common.sh@862 -- # return 0 00:15:48.331 05:18:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:48.331 05:18:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 05:18:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.331 05:18:04 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 [2024-11-19 05:18:04.701970] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8e49c0/0x8e8eb0) succeed. 00:15:48.331 [2024-11-19 05:18:04.710964] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8e5f10/0x92a550) succeed. 
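Decoding the masks above: -m 0xE is binary 1110, so the target's reactors claim cores 1, 2 and 3, matching the three 'Reactor started on core' notices, while core 0 (mask 0x1) stays free for the client-side tools launched later with -c 0x1. -e 0xFFFF enables every tracepoint group, hence the app_setup_trace hint about capturing a snapshot with 'spdk_trace -s nvmf -i 0'.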
00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 Malloc0 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 Delay0 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 [2024-11-19 05:18:04.869129] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:48.331 05:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 05:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 05:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.331 05:18:04 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:48.619 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.619 [2024-11-19 05:18:04.962341] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:50.527 Initializing NVMe Controllers 00:15:50.527 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:50.527 controller IO queue size 128 less than required 00:15:50.527 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:50.527 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:50.527 Initialization complete. Launching workers. 
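For reference, the target stack the abort tool is hammering was assembled by the rpc_cmd sequence traced above; spelled out as direct rpc.py calls against the default /var/tmp/spdk.sock, it is roughly:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc_py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB bdev, 4096-byte blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 \
            -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1s added latency (usec), so I/O stays in flight long enough to abort
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

In the counters that follow, success plus unsuccess (49094 + 61) accounts exactly for the 49155 aborts submitted; the further 62 reported as 'failed to submit' were never queued at all.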
00:15:50.527 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 49094 00:15:50.527 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 49155, failed to submit 62 00:15:50.527 success 49094, unsuccess 61, failed 0 00:15:50.527 05:18:07 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:50.527 05:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.527 05:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:50.527 05:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.527 05:18:07 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:50.527 05:18:07 -- target/abort.sh@38 -- # nvmftestfini 00:15:50.527 05:18:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.527 05:18:07 -- nvmf/common.sh@116 -- # sync 00:15:50.527 05:18:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:50.527 05:18:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:50.527 05:18:07 -- nvmf/common.sh@119 -- # set +e 00:15:50.527 05:18:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.527 05:18:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:50.786 rmmod nvme_rdma 00:15:50.786 rmmod nvme_fabrics 00:15:50.786 05:18:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.786 05:18:07 -- nvmf/common.sh@123 -- # set -e 00:15:50.786 05:18:07 -- nvmf/common.sh@124 -- # return 0 00:15:50.786 05:18:07 -- nvmf/common.sh@477 -- # '[' -n 1768890 ']' 00:15:50.787 05:18:07 -- nvmf/common.sh@478 -- # killprocess 1768890 00:15:50.787 05:18:07 -- common/autotest_common.sh@936 -- # '[' -z 1768890 ']' 00:15:50.787 05:18:07 -- common/autotest_common.sh@940 -- # kill -0 1768890 00:15:50.787 05:18:07 -- common/autotest_common.sh@941 -- # uname 00:15:50.787 05:18:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.787 05:18:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1768890 00:15:50.787 05:18:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:50.787 05:18:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:50.787 05:18:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1768890' 00:15:50.787 killing process with pid 1768890 00:15:50.787 05:18:07 -- common/autotest_common.sh@955 -- # kill 1768890 00:15:50.787 05:18:07 -- common/autotest_common.sh@960 -- # wait 1768890 00:15:51.046 05:18:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.046 05:18:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:51.046 00:15:51.046 real 0m10.564s 00:15:51.046 user 0m14.645s 00:15:51.046 sys 0m5.604s 00:15:51.046 05:18:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.046 05:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.046 ************************************ 00:15:51.046 END TEST nvmf_abort 00:15:51.046 ************************************ 00:15:51.046 05:18:07 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:51.046 05:18:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.046 05:18:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.046 05:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.046 ************************************ 00:15:51.046 START TEST nvmf_ns_hotplug_stress 00:15:51.046 ************************************ 00:15:51.046 05:18:07 -- common/autotest_common.sh@1114 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:51.047 * Looking for test storage... 00:15:51.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:51.307 05:18:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.307 05:18:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.307 05:18:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.307 05:18:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.307 05:18:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.307 05:18:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.307 05:18:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.307 05:18:07 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.307 05:18:07 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.307 05:18:07 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.307 05:18:07 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.307 05:18:07 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.307 05:18:07 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.307 05:18:07 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.307 05:18:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.307 05:18:07 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.307 05:18:07 -- scripts/common.sh@344 -- # : 1 00:15:51.307 05:18:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.307 05:18:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.307 05:18:07 -- scripts/common.sh@364 -- # decimal 1 00:15:51.307 05:18:07 -- scripts/common.sh@352 -- # local d=1 00:15:51.307 05:18:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.307 05:18:07 -- scripts/common.sh@354 -- # echo 1 00:15:51.307 05:18:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.307 05:18:07 -- scripts/common.sh@365 -- # decimal 2 00:15:51.307 05:18:07 -- scripts/common.sh@352 -- # local d=2 00:15:51.307 05:18:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.307 05:18:07 -- scripts/common.sh@354 -- # echo 2 00:15:51.307 05:18:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.307 05:18:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.307 05:18:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.307 05:18:07 -- scripts/common.sh@367 -- # return 0 00:15:51.307 05:18:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.307 05:18:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.307 --rc genhtml_branch_coverage=1 00:15:51.307 --rc genhtml_function_coverage=1 00:15:51.307 --rc genhtml_legend=1 00:15:51.307 --rc geninfo_all_blocks=1 00:15:51.307 --rc geninfo_unexecuted_blocks=1 00:15:51.307 00:15:51.307 ' 00:15:51.307 05:18:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.307 --rc genhtml_branch_coverage=1 00:15:51.307 --rc genhtml_function_coverage=1 00:15:51.307 --rc genhtml_legend=1 00:15:51.307 --rc geninfo_all_blocks=1 00:15:51.307 --rc geninfo_unexecuted_blocks=1 00:15:51.307 00:15:51.307 ' 00:15:51.307 05:18:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.307 --rc genhtml_branch_coverage=1 00:15:51.307 --rc genhtml_function_coverage=1 
00:15:51.307 --rc genhtml_legend=1 00:15:51.307 --rc geninfo_all_blocks=1 00:15:51.307 --rc geninfo_unexecuted_blocks=1 00:15:51.307 00:15:51.307 ' 00:15:51.307 05:18:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.307 --rc genhtml_branch_coverage=1 00:15:51.307 --rc genhtml_function_coverage=1 00:15:51.307 --rc genhtml_legend=1 00:15:51.307 --rc geninfo_all_blocks=1 00:15:51.307 --rc geninfo_unexecuted_blocks=1 00:15:51.307 00:15:51.307 ' 00:15:51.307 05:18:07 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.307 05:18:07 -- nvmf/common.sh@7 -- # uname -s 00:15:51.307 05:18:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.307 05:18:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.307 05:18:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.307 05:18:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.307 05:18:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.307 05:18:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.307 05:18:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.307 05:18:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.307 05:18:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.307 05:18:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.307 05:18:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:51.307 05:18:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:51.307 05:18:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.307 05:18:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.307 05:18:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.307 05:18:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:51.307 05:18:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.307 05:18:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.307 05:18:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.307 05:18:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.307 05:18:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.307 05:18:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.307 05:18:07 -- paths/export.sh@5 -- # export PATH 00:15:51.307 05:18:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.307 05:18:07 -- nvmf/common.sh@46 -- # : 0 00:15:51.307 05:18:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.307 05:18:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.307 05:18:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.307 05:18:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.307 05:18:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.307 05:18:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.307 05:18:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.307 05:18:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.307 05:18:07 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:51.307 05:18:07 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:51.307 05:18:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:51.307 05:18:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.307 05:18:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.307 05:18:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.307 05:18:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.307 05:18:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.307 05:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.307 05:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.307 05:18:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:51.308 05:18:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:51.308 05:18:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:51.308 05:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:57.881 05:18:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:57.881 05:18:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:57.881 05:18:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:57.881 05:18:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:57.881 05:18:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:57.881 05:18:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:57.881 05:18:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:57.881 05:18:13 -- nvmf/common.sh@294 -- # net_devs=() 00:15:57.881 05:18:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:57.881 05:18:13 -- nvmf/common.sh@295 -- 
# e810=() 00:15:57.881 05:18:13 -- nvmf/common.sh@295 -- # local -ga e810 00:15:57.881 05:18:13 -- nvmf/common.sh@296 -- # x722=() 00:15:57.881 05:18:13 -- nvmf/common.sh@296 -- # local -ga x722 00:15:57.881 05:18:13 -- nvmf/common.sh@297 -- # mlx=() 00:15:57.881 05:18:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:57.881 05:18:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.881 05:18:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:57.882 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:57.882 05:18:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:57.882 05:18:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:57.882 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:57.882 05:18:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:57.882 05:18:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.882 05:18:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:57.882 05:18:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.882 05:18:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:57.882 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.882 05:18:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.882 05:18:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:57.882 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.882 05:18:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:57.882 05:18:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:57.882 05:18:13 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:57.882 05:18:13 -- nvmf/common.sh@57 -- # uname 00:15:57.882 05:18:13 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:57.882 05:18:13 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:57.882 05:18:13 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:57.882 05:18:13 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:57.882 05:18:13 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:57.882 05:18:13 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:57.882 05:18:13 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:57.882 05:18:13 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:57.882 05:18:13 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:57.882 05:18:13 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:57.882 05:18:13 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:57.882 05:18:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:57.882 05:18:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:57.882 05:18:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:57.882 05:18:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:57.882 05:18:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@104 -- # continue 2 00:15:57.882 05:18:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@104 -- # continue 2 00:15:57.882 05:18:13 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:57.882 05:18:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:57.882 05:18:13 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:57.882 05:18:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:57.882 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:57.882 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:57.882 altname enp217s0f0np0 00:15:57.882 altname ens818f0np0 00:15:57.882 inet 192.168.100.8/24 scope global mlx_0_0 00:15:57.882 valid_lft forever preferred_lft forever 00:15:57.882 05:18:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:57.882 05:18:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:57.882 05:18:13 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:57.882 05:18:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:57.882 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:57.882 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:57.882 altname enp217s0f1np1 00:15:57.882 altname ens818f1np1 00:15:57.882 inet 192.168.100.9/24 scope global mlx_0_1 00:15:57.882 valid_lft forever preferred_lft forever 00:15:57.882 05:18:13 -- nvmf/common.sh@410 -- # return 0 00:15:57.882 05:18:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:57.882 05:18:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:57.882 05:18:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:57.882 05:18:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:57.882 05:18:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:57.882 05:18:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:57.882 05:18:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:57.882 05:18:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:57.882 05:18:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:57.882 05:18:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@104 -- # continue 2 00:15:57.882 05:18:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.882 05:18:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:57.882 05:18:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:57.882 05:18:13 -- 
nvmf/common.sh@104 -- # continue 2 00:15:57.882 05:18:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:57.882 05:18:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:57.882 05:18:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:57.882 05:18:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:57.882 05:18:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:57.882 05:18:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:57.882 192.168.100.9' 00:15:57.882 05:18:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:57.882 192.168.100.9' 00:15:57.882 05:18:13 -- nvmf/common.sh@445 -- # head -n 1 00:15:57.882 05:18:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:57.882 05:18:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:57.882 192.168.100.9' 00:15:57.882 05:18:14 -- nvmf/common.sh@446 -- # tail -n +2 00:15:57.882 05:18:14 -- nvmf/common.sh@446 -- # head -n 1 00:15:57.882 05:18:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:57.882 05:18:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:57.882 05:18:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:57.882 05:18:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:57.882 05:18:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:57.882 05:18:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:57.882 05:18:14 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:57.882 05:18:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.883 05:18:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.883 05:18:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.883 05:18:14 -- nvmf/common.sh@469 -- # nvmfpid=1773189 00:15:57.883 05:18:14 -- nvmf/common.sh@470 -- # waitforlisten 1773189 00:15:57.883 05:18:14 -- common/autotest_common.sh@829 -- # '[' -z 1773189 ']' 00:15:57.883 05:18:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.883 05:18:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.883 05:18:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.883 05:18:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.883 05:18:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.883 05:18:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:57.883 [2024-11-19 05:18:14.089860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
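nvmfappstart backgrounds the target binary and waitforlisten then polls its RPC socket for up to max_retries=100 rounds. A minimal sketch of that start-and-wait pattern (using rpc_get_methods as the probe is an assumption about waitforlisten's internals):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1  # target died before its socket came up
        sleep 0.5
    done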
00:15:57.883 [2024-11-19 05:18:14.089909] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.883 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.883 [2024-11-19 05:18:14.160289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.883 [2024-11-19 05:18:14.197669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:57.883 [2024-11-19 05:18:14.197799] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.883 [2024-11-19 05:18:14.197809] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.883 [2024-11-19 05:18:14.197818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.883 [2024-11-19 05:18:14.197919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.883 [2024-11-19 05:18:14.198002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.883 [2024-11-19 05:18:14.198003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.452 05:18:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.452 05:18:14 -- common/autotest_common.sh@862 -- # return 0 00:15:58.452 05:18:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.452 05:18:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.452 05:18:14 -- common/autotest_common.sh@10 -- # set +x 00:15:58.452 05:18:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.452 05:18:14 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:58.452 05:18:14 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:58.711 [2024-11-19 05:18:15.126904] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7c69c0/0x7caeb0) succeed. 00:15:58.711 [2024-11-19 05:18:15.136049] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7c7f10/0x80c550) succeed. 
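Those 'Create IB device ... succeed' notices ride on the kernel RDMA stack that rdma_device_init loaded during both test setups; condensed from the modprobe sequence in the trace (the real load_ib_rdma_modules does not loop, this is a compact rewrite):

    load_ib_rdma_modules() {
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }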
00:15:58.711 05:18:15 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.970 05:18:15 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:59.229 [2024-11-19 05:18:15.618129] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.229 05:18:15 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:59.488 05:18:15 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:59.488 Malloc0 00:15:59.488 05:18:16 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:59.747 Delay0 00:15:59.747 05:18:16 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.007 05:18:16 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:00.007 NULL1 00:16:00.266 05:18:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:00.266 05:18:16 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:00.266 05:18:16 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1773640 00:16:00.266 05:18:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640 00:16:00.266 05:18:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.266 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.643 Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 05:18:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.643 05:18:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:01.643 05:18:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:01.903 true 00:16:01.903 05:18:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640 00:16:01.903 05:18:18 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:02.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:02.901 05:18:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:02.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:02.901 05:18:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:16:02.901 05:18:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:16:03.188 true
00:16:03.188 05:18:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:03.188 05:18:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:03.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:03.759 05:18:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:04.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:04.016 05:18:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:16:04.016 05:18:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:16:04.274 true
00:16:04.274 05:18:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:04.274 05:18:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:05.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:05.212 05:18:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:05.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:05.212 05:18:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:16:05.212 05:18:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:16:05.472 true
00:16:05.472 05:18:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:05.472 05:18:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:06.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:06.410 05:18:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:06.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:06.410 05:18:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:16:06.410 05:18:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:16:06.670 true
00:16:06.670 05:18:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:06.670 05:18:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:07.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:07.607 05:18:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:07.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:07.607 05:18:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:16:07.607 05:18:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:16:07.866 true
00:16:07.866 05:18:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:07.866 05:18:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:08.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:08.804 05:18:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:08.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:08.804 05:18:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:16:08.804 05:18:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:16:09.063 true
00:16:09.063 05:18:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:09.063 05:18:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:10.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:10.001 05:18:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:10.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:10.001 05:18:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:16:10.001 05:18:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:16:10.260 true
00:16:10.260 05:18:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:10.260 05:18:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:11.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:11.197 05:18:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:11.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:11.197 05:18:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:16:11.197 05:18:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:16:11.456 true
00:16:11.456 05:18:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:11.456 05:18:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:12.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:12.394 05:18:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
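
The xtrace above is a single pattern repeated: while the I/O generator (PID 1773640) stays alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is hot-removed and immediately re-added, and the NULL1 bdev is grown by one unit per pass. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected fallout of reads in flight landing on the namespace while it is detached. A minimal bash sketch of that loop, reconstructed from the @44-@50 line references in the trace (the rpc and perf_pid variables are stand-ins for illustration, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1001
    while kill -0 "$perf_pid" 2>/dev/null; do                          # @44: run while the workload lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back
        null_size=$((null_size + 1))                                   # @49: 1002, 1003, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"                     # @50: resize the bdev under load
    done
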
00:16:12.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:12.394 05:18:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:16:12.394 05:18:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:16:12.653 true
00:16:12.653 05:18:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:12.653 05:18:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:13.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:13.588 05:18:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:13.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:13.588 05:18:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:16:13.588 05:18:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:16:13.847 true
00:16:13.847 05:18:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:13.847 05:18:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:14.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:14.785 05:18:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:14.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:15.044 05:18:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:16:15.044 05:18:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:16:15.044 true
00:16:15.044 05:18:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:15.044 05:18:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:15.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:15.982 05:18:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:15.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:16.240 05:18:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:16:16.240 05:18:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:16:16.240 true
00:16:16.240 05:18:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:16.240 05:18:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:17.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:17.177 05:18:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:17.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:17.436 05:18:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:16:17.436 05:18:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:16:17.436 true
00:16:17.436 05:18:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:17.436 05:18:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:18.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:18.373 05:18:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:18.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:18.373 05:18:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:16:18.373 05:18:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:16:18.632 true
00:16:18.632 05:18:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:18.632 05:18:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:19.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:19.569 05:18:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:19.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:19.828 05:18:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:16:19.828 05:18:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:16:19.828 true
00:16:19.828 05:18:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:19.828 05:18:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:20.765 05:18:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:20.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.024 05:18:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:16:21.024 05:18:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:16:21.283 true
00:16:21.283 05:18:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:21.283 05:18:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:22.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:22.220 05:18:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:22.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:22.220 05:18:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:16:22.220 05:18:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:16:22.479 true
00:16:22.479 05:18:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:22.479 05:18:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:23.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:23.417 05:18:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:23.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:23.417 05:18:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:16:23.417 05:18:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:16:23.676 true
00:16:23.676 05:18:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:23.676 05:18:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:24.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:24.614 05:18:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:24.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:24.614 05:18:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:16:24.614 05:18:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:16:24.614 true
00:16:24.873 05:18:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:24.873 05:18:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:25.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:25.811 05:18:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:25.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:25.811 05:18:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:16:25.811 05:18:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:16:26.070 true
00:16:26.070 05:18:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:26.070 05:18:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:27.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:27.007 05:18:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:27.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:27.007 05:18:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:16:27.007 05:18:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:16:27.266 true
00:16:27.266 05:18:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:27.266 05:18:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:28.204 05:18:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:28.204 05:18:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:16:28.204 05:18:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:16:28.463 true
00:16:28.463 05:18:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:28.463 05:18:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:29.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:29.398 05:18:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:29.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:29.398 05:18:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:16:29.398 05:18:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:16:29.657 true
00:16:29.657 05:18:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:29.657 05:18:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:30.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:30.595 05:18:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:30.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:30.595 05:18:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:16:30.595 05:18:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:16:30.855 true
00:16:30.855 05:18:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:30.855 05:18:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:30.855 05:18:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:31.114 05:18:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:16:31.114 05:18:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:16:31.374 true
00:16:31.374 05:18:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:31.374 05:18:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:31.633 05:18:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:31.633 05:18:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:16:31.633 05:18:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:16:31.892 true
00:16:31.892 05:18:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:31.892 05:18:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:32.152 05:18:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:32.411 05:18:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:16:32.411 05:18:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:16:32.411 true
00:16:32.411 05:18:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:32.411 05:18:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:32.670 Initializing NVMe Controllers
00:16:32.670 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:32.670 Controller IO queue size 128, less than required.
00:16:32.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:32.670 Controller IO queue size 128, less than required.
00:16:32.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:32.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:32.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:16:32.670 Initialization complete. Launching workers.
00:16:32.670 ========================================================
00:16:32.670                                                                                                  Latency(us)
00:16:32.670 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:16:32.670 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    6169.73       3.01   18199.50     794.42 1132419.64
00:16:32.670 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35691.50      17.43    3586.22    1497.11  279075.80
00:16:32.670 ========================================================
00:16:32.670 Total                                                                          :   41861.23      20.44    5740.01     794.42 1132419.64
00:16:32.670
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:16:32.929 true
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1773640
00:16:32.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1773640) - No such process
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@53 -- # wait 1773640
00:16:32.929 05:18:49 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:33.188 05:18:49 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:33.447 05:18:49 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:16:33.447 05:18:49 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:16:33.447 05:18:49 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:16:33.447 05:18:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:33.447 05:18:49 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:16:33.447 null0
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:16:33.706 null1
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:33.706 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:16:33.965 null2
00:16:33.965 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:33.965 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:33.965 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:16:34.224 null3
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:16:34.224 null4
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
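
The Total row of the latency summary above can be sanity-checked rather than taken on faith: IOPS and MiB/s add across the two namespaces, and the average latency is the IOPS-weighted mean of the per-namespace averages (a consistency check on the reported numbers, not additional data from the run):

\[
\bar{L}_{\mathrm{total}}
  = \frac{6169.73 \cdot 18199.50 + 35691.50 \cdot 3586.22}{6169.73 + 35691.50}
  \approx \frac{2.4028 \times 10^{8}}{41861.23}
  \approx 5740.0~\mu\mathrm{s}
\]

which matches the reported 5740.01 us; the min (794.42 us) and max (1132419.64 us) columns are likewise the extrema of the two per-namespace rows.
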
00:16:34.224 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:34.483 null5 00:16:34.483 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:34.483 05:18:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:34.483 05:18:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:34.742 null6 00:16:34.742 05:18:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:34.742 05:18:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:34.742 05:18:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:35.001 null7 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
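
With the single-threaded phase finished, the script fans out: eight null bdevs (null0 through null7) are created with bdev_null_create, and one add_remove worker is started per namespace/bdev pair. A bash sketch of the fan-out implied by the @58-@66 xtrace above (the backgrounding with & is inferred from the pids+=($!) entries and the later wait; $rpc is the same shorthand as in the earlier sketch, and add_remove is sketched further below):

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: null bdev, 100 MB with a 4096-byte block size
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &            # @63: one hotplug worker per namespace ID
        pids+=($!)                                  # @64: remember the worker's PID
    done
    wait "${pids[@]}"                               # @66: block until all eight workers exit
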
00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@66 -- # wait 1779786 1779788 1779791 1779794 1779797 1779800 1779802 1779803 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:35.001 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:35.002 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:35.002 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.002 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:16:35.260 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:35.261 05:18:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:35.554 05:18:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:35.903 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.163 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:36.163 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.163 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:36.163 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:36.164 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.423 05:18:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
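
The interleaved @14-@18 xtrace in this stretch comes from eight concurrent instances of the add_remove helper, each hammering its own namespace ID against nqn.2016-06.io.spdk:cnode1. A sketch of that helper, reconstructed from the trace (the local declarations, the (( i < 10 )) guard, and the -n argument order all appear verbatim in the xtrace; the function body as a whole is a reconstruction, not the script verbatim):

    add_remove() {
        local nsid=$1 bdev=$2                                                      # @14: namespace ID and backing bdev
        local i
        for (( i = 0; i < 10; i++ )); do                                           # @16: ten add/remove cycles
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }
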
00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.683 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:36.943 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.203 05:18:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:37.461 05:18:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.721 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.722 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.981 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:38.241 05:18:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.501 05:18:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.501 05:18:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:38.760 05:18:55 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:38.760 05:18:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:38.760 05:18:55 -- nvmf/common.sh@116 -- # sync 00:16:38.760 05:18:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:38.760 05:18:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:38.760 05:18:55 -- nvmf/common.sh@119 -- # set +e 00:16:38.760 05:18:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:38.760 05:18:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:38.760 rmmod nvme_rdma 00:16:38.760 rmmod nvme_fabrics 00:16:38.760 05:18:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:38.760 05:18:55 -- nvmf/common.sh@123 -- # set -e 00:16:38.760 05:18:55 -- nvmf/common.sh@124 -- # return 0 00:16:38.760 05:18:55 -- nvmf/common.sh@477 -- # '[' -n 1773189 ']' 00:16:38.760 05:18:55 -- nvmf/common.sh@478 -- # killprocess 1773189 00:16:38.760 05:18:55 -- common/autotest_common.sh@936 -- # '[' -z 1773189 ']' 00:16:38.760 05:18:55 -- common/autotest_common.sh@940 -- # kill -0 1773189 00:16:38.760 05:18:55 -- common/autotest_common.sh@941 -- # uname 00:16:38.760 05:18:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.760 05:18:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1773189 00:16:39.019 05:18:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:39.019 05:18:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:39.019 05:18:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1773189' 00:16:39.019 killing process with pid 1773189 00:16:39.019 05:18:55 -- common/autotest_common.sh@955 -- # kill 1773189 00:16:39.019 05:18:55 -- common/autotest_common.sh@960 -- # wait 1773189 00:16:39.280 05:18:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.280 05:18:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:39.280 00:16:39.280 real 0m48.100s 00:16:39.280 user 3m18.352s 00:16:39.280 sys 0m13.448s 00:16:39.280 05:18:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:39.280 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.280 ************************************ 00:16:39.280 END TEST nvmf_ns_hotplug_stress 00:16:39.280 ************************************ 00:16:39.280 05:18:55 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:39.280 05:18:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
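[editor's note] Between the last loop pass and the END TEST banner above, nvmftestfini runs the standard teardown: sync, unload the kernel initiator modules (the bare rmmod nvme_rdma / rmmod nvme_fabrics lines are modprobe's verbose output; removing nvme-rdma pulls nvme_fabrics with it), then kill the target process recorded at startup and report the suite's wall-clock time. A condensed sketch mirroring the @119-@124 trace lines; the sleep-and-retry inside the for {1..20} loop is an assumption, everything else is paraphrase rather than nvmf/common.sh verbatim:

    # Hedged teardown sketch (nvmfcleanup + killprocess, as traced).
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1   # assumption: back off while the module is still busy
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ]; then   # 1773189 in this run
        kill "$nvmfpid"
        wait "$nvmfpid"          # reap it so the real/user/sys timing above is complete
    fi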
00:16:39.280 05:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.280 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.280 ************************************ 00:16:39.280 START TEST nvmf_connect_stress 00:16:39.280 ************************************ 00:16:39.280 05:18:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:39.280 * Looking for test storage... 00:16:39.280 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:39.280 05:18:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:39.280 05:18:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:39.280 05:18:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:39.280 05:18:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:39.280 05:18:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:39.280 05:18:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:39.280 05:18:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:39.280 05:18:55 -- scripts/common.sh@335 -- # IFS=.-: 00:16:39.280 05:18:55 -- scripts/common.sh@335 -- # read -ra ver1 00:16:39.280 05:18:55 -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.280 05:18:55 -- scripts/common.sh@336 -- # read -ra ver2 00:16:39.280 05:18:55 -- scripts/common.sh@337 -- # local 'op=<' 00:16:39.280 05:18:55 -- scripts/common.sh@339 -- # ver1_l=2 00:16:39.280 05:18:55 -- scripts/common.sh@340 -- # ver2_l=1 00:16:39.280 05:18:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:39.280 05:18:55 -- scripts/common.sh@343 -- # case "$op" in 00:16:39.280 05:18:55 -- scripts/common.sh@344 -- # : 1 00:16:39.280 05:18:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:39.280 05:18:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.280 05:18:55 -- scripts/common.sh@364 -- # decimal 1 00:16:39.280 05:18:55 -- scripts/common.sh@352 -- # local d=1 00:16:39.280 05:18:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.280 05:18:55 -- scripts/common.sh@354 -- # echo 1 00:16:39.280 05:18:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:39.280 05:18:55 -- scripts/common.sh@365 -- # decimal 2 00:16:39.280 05:18:55 -- scripts/common.sh@352 -- # local d=2 00:16:39.280 05:18:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.280 05:18:55 -- scripts/common.sh@354 -- # echo 2 00:16:39.280 05:18:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:39.280 05:18:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:39.280 05:18:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:39.280 05:18:55 -- scripts/common.sh@367 -- # return 0 00:16:39.280 05:18:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.280 05:18:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.280 --rc genhtml_branch_coverage=1 00:16:39.280 --rc genhtml_function_coverage=1 00:16:39.280 --rc genhtml_legend=1 00:16:39.280 --rc geninfo_all_blocks=1 00:16:39.280 --rc geninfo_unexecuted_blocks=1 00:16:39.280 00:16:39.280 ' 00:16:39.280 05:18:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.280 --rc genhtml_branch_coverage=1 00:16:39.280 --rc genhtml_function_coverage=1 00:16:39.280 --rc genhtml_legend=1 00:16:39.280 --rc geninfo_all_blocks=1 00:16:39.280 --rc geninfo_unexecuted_blocks=1 00:16:39.280 00:16:39.280 ' 00:16:39.280 05:18:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.280 --rc genhtml_branch_coverage=1 00:16:39.280 --rc genhtml_function_coverage=1 00:16:39.280 --rc genhtml_legend=1 00:16:39.281 --rc geninfo_all_blocks=1 00:16:39.281 --rc geninfo_unexecuted_blocks=1 00:16:39.281 00:16:39.281 ' 00:16:39.281 05:18:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:39.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.281 --rc genhtml_branch_coverage=1 00:16:39.281 --rc genhtml_function_coverage=1 00:16:39.281 --rc genhtml_legend=1 00:16:39.281 --rc geninfo_all_blocks=1 00:16:39.281 --rc geninfo_unexecuted_blocks=1 00:16:39.281 00:16:39.281 ' 00:16:39.281 05:18:55 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.281 05:18:55 -- nvmf/common.sh@7 -- # uname -s 00:16:39.281 05:18:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.281 05:18:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.281 05:18:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.281 05:18:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.281 05:18:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.281 05:18:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.281 05:18:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.281 05:18:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.281 05:18:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.281 05:18:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.281 05:18:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:39.281 05:18:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:39.281 05:18:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.281 05:18:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.281 05:18:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.281 05:18:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:39.281 05:18:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.281 05:18:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.281 05:18:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.281 05:18:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.281 05:18:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.281 05:18:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.281 05:18:55 -- paths/export.sh@5 -- # export PATH 00:16:39.281 05:18:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.281 05:18:55 -- nvmf/common.sh@46 -- # : 0 00:16:39.281 05:18:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.281 05:18:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.281 05:18:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.281 05:18:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.281 05:18:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.281 05:18:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.281 05:18:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.281 05:18:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.281 05:18:55 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:39.281 05:18:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:39.281 05:18:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.281 05:18:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.281 05:18:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.281 05:18:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.281 05:18:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.281 05:18:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.281 05:18:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.541 05:18:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.541 05:18:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.541 05:18:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.541 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.121 05:19:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.121 05:19:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:46.121 05:19:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:46.121 05:19:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:46.121 05:19:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:46.121 05:19:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:46.121 05:19:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:46.121 05:19:02 -- nvmf/common.sh@294 -- # net_devs=() 00:16:46.121 05:19:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:46.121 05:19:02 -- nvmf/common.sh@295 -- # e810=() 00:16:46.121 05:19:02 -- nvmf/common.sh@295 -- # local -ga e810 00:16:46.121 05:19:02 -- nvmf/common.sh@296 -- # x722=() 00:16:46.121 05:19:02 -- nvmf/common.sh@296 -- # local -ga x722 00:16:46.121 05:19:02 -- nvmf/common.sh@297 -- # mlx=() 00:16:46.121 05:19:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:46.121 05:19:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.121 05:19:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
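[editor's note] The array bookkeeping above is gather_supported_nvmf_pci_devs narrowing the machine's NICs to one family: it fills per-family device-ID lists from the pci_bus_cache lookup table, admits Intel E810/X722 and Mellanox parts because this is an rdma run, and then, as the @327 line just below shows, keeps only the mlx list since this job tests mlx5 NICs. A condensed sketch of that selection; the TEST_TRANSPORT and SPDK_TEST_NVMF_NICS variable names are assumptions standing in for whatever the traced [[ rdma == rdma ]] and [[ mlx5 == mlx5 ]] comparisons actually expand:

    # Hedged sketch of the traced NIC selection (vendor/device IDs copied from the log).
    declare -A pci_bus_cache   # "vendor:device" -> space-separated PCI addresses, built earlier in nvmf/common.sh
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1015"]} ${pci_bus_cache["$mellanox:0x1017"]})  # the trace lists several more mlx IDs
    pci_devs=("${e810[@]}")
    if [[ $TEST_TRANSPORT == rdma ]]; then        # assumed variable; traced as [[ rdma == rdma ]]
        pci_devs+=("${x722[@]}" "${mlx[@]}")
    fi
    if [[ $SPDK_TEST_NVMF_NICS == mlx5 ]]; then   # assumed variable; traced as [[ mlx5 == mlx5 ]]
        pci_devs=("${mlx[@]}")                    # this run: the two 0x1015 ports found just below
    fi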
00:16:46.121 05:19:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:46.121 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:46.121 05:19:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.121 05:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:46.121 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:46.121 05:19:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.121 05:19:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.121 05:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.121 05:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:46.121 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:46.121 05:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.121 05:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.121 05:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:46.121 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:46.121 05:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.121 05:19:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:46.121 05:19:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:46.121 05:19:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:46.121 05:19:02 -- nvmf/common.sh@57 -- # uname 00:16:46.121 05:19:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:46.121 05:19:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:46.121 05:19:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:46.121 05:19:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:46.121 
05:19:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:46.121 05:19:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:46.121 05:19:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:46.121 05:19:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:46.121 05:19:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:46.121 05:19:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:46.121 05:19:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:46.121 05:19:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.121 05:19:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:46.121 05:19:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:46.121 05:19:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.121 05:19:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:46.121 05:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.121 05:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:46.121 05:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:46.121 05:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.121 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:46.122 05:19:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:46.122 05:19:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:46.122 05:19:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:46.122 05:19:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:46.122 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.122 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:46.122 altname enp217s0f0np0 00:16:46.122 altname ens818f0np0 00:16:46.122 inet 192.168.100.8/24 scope global mlx_0_0 00:16:46.122 valid_lft forever preferred_lft forever 00:16:46.122 05:19:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:46.122 05:19:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:46.122 05:19:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:46.122 05:19:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:46.122 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.122 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:46.122 altname enp217s0f1np1 
00:16:46.122 altname ens818f1np1 00:16:46.122 inet 192.168.100.9/24 scope global mlx_0_1 00:16:46.122 valid_lft forever preferred_lft forever 00:16:46.122 05:19:02 -- nvmf/common.sh@410 -- # return 0 00:16:46.122 05:19:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.122 05:19:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:46.122 05:19:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:46.122 05:19:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:46.122 05:19:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.122 05:19:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:46.122 05:19:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:46.122 05:19:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.122 05:19:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:46.122 05:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:46.122 05:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.122 05:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.122 05:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:46.122 05:19:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:46.122 05:19:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:46.122 05:19:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:46.122 05:19:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:46.122 05:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:46.122 05:19:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:46.122 192.168.100.9' 00:16:46.122 05:19:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:46.122 192.168.100.9' 00:16:46.122 05:19:02 -- nvmf/common.sh@445 -- # head -n 1 00:16:46.122 05:19:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:46.122 05:19:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:46.122 192.168.100.9' 00:16:46.122 05:19:02 -- nvmf/common.sh@446 -- # tail -n +2 00:16:46.122 05:19:02 -- nvmf/common.sh@446 -- # head -n 1 00:16:46.122 05:19:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:46.382 05:19:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:46.382 05:19:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:46.382 05:19:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:46.382 05:19:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:46.382 05:19:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:46.382 05:19:02 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:46.382 05:19:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:46.382 05:19:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.382 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.382 05:19:02 -- nvmf/common.sh@469 -- # nvmfpid=1784062 00:16:46.382 05:19:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:46.383 05:19:02 -- nvmf/common.sh@470 -- # waitforlisten 1784062 00:16:46.383 05:19:02 -- common/autotest_common.sh@829 -- # '[' -z 1784062 ']' 00:16:46.383 05:19:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.383 05:19:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.383 05:19:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.383 05:19:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.383 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.383 [2024-11-19 05:19:02.765234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.383 [2024-11-19 05:19:02.765287] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.383 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.383 [2024-11-19 05:19:02.837384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.383 [2024-11-19 05:19:02.875208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:46.383 [2024-11-19 05:19:02.875347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.383 [2024-11-19 05:19:02.875357] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.383 [2024-11-19 05:19:02.875366] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
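[editor's note] nvmfappstart, traced above, boots the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 1784062) and then blocks in waitforlisten until the app's RPC socket answers; the EAL, trace_flags, and app_setup_trace lines are the target's own startup chatter, including an apparently benign *ERROR* about one over-long tracepoint name (startup continues into the reactor lines below). A sketch of the start-and-poll pattern, assuming the default /var/tmp/spdk.sock path named in the "Waiting for process..." message; the real waitforlisten adds a timeout and finer-grained checks:

    # Hedged sketch: launch nvmf_tgt and poll its RPC socket until it answers.
    # -m 0xE puts reactors on cores 1-3, matching the three "Reactor started" notices below.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    while ! "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done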
00:16:46.383 [2024-11-19 05:19:02.875490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.383 [2024-11-19 05:19:02.875584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.383 [2024-11-19 05:19:02.875586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.321 05:19:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.321 05:19:03 -- common/autotest_common.sh@862 -- # return 0 00:16:47.321 05:19:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:47.321 05:19:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.321 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.321 05:19:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.321 05:19:03 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:47.321 05:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.321 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.321 [2024-11-19 05:19:03.651926] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5539c0/0x557eb0) succeed. 00:16:47.321 [2024-11-19 05:19:03.661097] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x554f10/0x599550) succeed. 00:16:47.321 05:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.321 05:19:03 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.321 05:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.321 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.321 05:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.321 05:19:03 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:47.321 05:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.321 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.321 [2024-11-19 05:19:03.773784] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.321 05:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.321 05:19:03 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:47.321 05:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.321 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.321 NULL1 00:16:47.321 05:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.321 05:19:03 -- target/connect_stress.sh@21 -- # PERF_PID=1784351 00:16:47.322 05:19:03 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.322 05:19:03 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:47.322 05:19:03 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.322 05:19:03 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:47.322 05:19:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.322 05:19:03 -- target/connect_stress.sh@28 -- # cat 00:16:47.322 05:19:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.322 05:19:03 -- 
target/connect_stress.sh@28 -- # cat
... (the for i in $(seq 1 20) / cat xtrace pair repeats for all twenty iterations, 00:16:47.322-00:16:47.581, with one "EAL: No free 2048 kB hugepages reported on node 1" notice interleaved at 00:16:47.322) ...
00:16:47.581 05:19:03 -- target/connect_stress.sh@34 -- # kill -0 1784351
00:16:47.581 05:19:03 -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.581 05:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.581 05:19:03 -- common/autotest_common.sh@10 -- # set +x
... (the same [[ 0 == 0 ]] / kill -0 1784351 / rpc_cmd liveness cycle repeats roughly every 0.3-0.6 s while connect_stress runs, from 00:16:47.841 (05:19:04) through 00:16:57.213 (05:19:13)) ...
00:16:57.472 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:57.472 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.472 05:19:14 -- target/connect_stress.sh@34 -- # kill -0 1784351
00:16:57.472 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1784351) - No such process
00:16:57.472 05:19:14 -- target/connect_stress.sh@38 -- # wait 1784351
00:16:57.472 05:19:14 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:57.472 05:19:14 -- target/connect_stress.sh@41 -- # trap - SIGINT
SIGTERM EXIT 00:16:57.472 05:19:14 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:57.472 05:19:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:57.472 05:19:14 -- nvmf/common.sh@116 -- # sync 00:16:57.472 05:19:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:57.472 05:19:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:57.472 05:19:14 -- nvmf/common.sh@119 -- # set +e 00:16:57.472 05:19:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:57.472 05:19:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:57.731 rmmod nvme_rdma 00:16:57.732 rmmod nvme_fabrics 00:16:57.732 05:19:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:57.732 05:19:14 -- nvmf/common.sh@123 -- # set -e 00:16:57.732 05:19:14 -- nvmf/common.sh@124 -- # return 0 00:16:57.732 05:19:14 -- nvmf/common.sh@477 -- # '[' -n 1784062 ']' 00:16:57.732 05:19:14 -- nvmf/common.sh@478 -- # killprocess 1784062 00:16:57.732 05:19:14 -- common/autotest_common.sh@936 -- # '[' -z 1784062 ']' 00:16:57.732 05:19:14 -- common/autotest_common.sh@940 -- # kill -0 1784062 00:16:57.732 05:19:14 -- common/autotest_common.sh@941 -- # uname 00:16:57.732 05:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.732 05:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1784062 00:16:57.732 05:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:57.732 05:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:57.732 05:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1784062' 00:16:57.732 killing process with pid 1784062 00:16:57.732 05:19:14 -- common/autotest_common.sh@955 -- # kill 1784062 00:16:57.732 05:19:14 -- common/autotest_common.sh@960 -- # wait 1784062 00:16:57.991 05:19:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:57.991 05:19:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:57.991 00:16:57.991 real 0m18.733s 00:16:57.991 user 0m41.931s 00:16:57.991 sys 0m7.742s 00:16:57.991 05:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:57.991 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:16:57.991 ************************************ 00:16:57.991 END TEST nvmf_connect_stress 00:16:57.991 ************************************ 00:16:57.991 05:19:14 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:57.991 05:19:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:57.991 05:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:57.991 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:16:57.991 ************************************ 00:16:57.991 START TEST nvmf_fused_ordering 00:16:57.991 ************************************ 00:16:57.991 05:19:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:57.991 * Looking for test storage... 
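The connect_stress phase that ended above boils down to one pattern: launch the stressor against the target, then keep firing RPCs at the target for as long as the stressor stays alive. A minimal bash sketch of that loop, with illustrative variable names rather than the harness's actual internals:

    # Sketch only: mirrors the kill -0 / rpc_cmd polling visible in the trace above.
    ./connect_stress -c 0x1 -r "$TRID" -t 10 &   # stressor runs for ~10 seconds
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do    # signal 0: liveness probe only, nothing is sent
        rpc_cmd < "$rpcs"                        # replay the queued RPCs at the target
    done
    wait "$PERF_PID"                             # reap the stressor once it exits

Once kill -0 fails with "No such process", as above, the script waits on the PID, removes rpc.txt, and tears the target down.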
00:16:57.991 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:57.991 05:19:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:57.991 05:19:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:57.991 05:19:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:58.251 05:19:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:58.251 05:19:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:58.251 05:19:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:58.251 05:19:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:58.251 05:19:14 -- scripts/common.sh@335 -- # IFS=.-: 00:16:58.251 05:19:14 -- scripts/common.sh@335 -- # read -ra ver1 00:16:58.251 05:19:14 -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.251 05:19:14 -- scripts/common.sh@336 -- # read -ra ver2 00:16:58.251 05:19:14 -- scripts/common.sh@337 -- # local 'op=<' 00:16:58.251 05:19:14 -- scripts/common.sh@339 -- # ver1_l=2 00:16:58.251 05:19:14 -- scripts/common.sh@340 -- # ver2_l=1 00:16:58.251 05:19:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:58.251 05:19:14 -- scripts/common.sh@343 -- # case "$op" in 00:16:58.251 05:19:14 -- scripts/common.sh@344 -- # : 1 00:16:58.251 05:19:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:58.251 05:19:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.251 05:19:14 -- scripts/common.sh@364 -- # decimal 1 00:16:58.251 05:19:14 -- scripts/common.sh@352 -- # local d=1 00:16:58.251 05:19:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.251 05:19:14 -- scripts/common.sh@354 -- # echo 1 00:16:58.251 05:19:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:58.251 05:19:14 -- scripts/common.sh@365 -- # decimal 2 00:16:58.251 05:19:14 -- scripts/common.sh@352 -- # local d=2 00:16:58.251 05:19:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.251 05:19:14 -- scripts/common.sh@354 -- # echo 2 00:16:58.251 05:19:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:58.251 05:19:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:58.251 05:19:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:58.251 05:19:14 -- scripts/common.sh@367 -- # return 0 00:16:58.251 05:19:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.251 05:19:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:58.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.251 --rc genhtml_branch_coverage=1 00:16:58.251 --rc genhtml_function_coverage=1 00:16:58.251 --rc genhtml_legend=1 00:16:58.251 --rc geninfo_all_blocks=1 00:16:58.251 --rc geninfo_unexecuted_blocks=1 00:16:58.251 00:16:58.251 ' 00:16:58.251 05:19:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:58.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.251 --rc genhtml_branch_coverage=1 00:16:58.251 --rc genhtml_function_coverage=1 00:16:58.251 --rc genhtml_legend=1 00:16:58.251 --rc geninfo_all_blocks=1 00:16:58.251 --rc geninfo_unexecuted_blocks=1 00:16:58.251 00:16:58.251 ' 00:16:58.251 05:19:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:58.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.251 --rc genhtml_branch_coverage=1 00:16:58.251 --rc genhtml_function_coverage=1 00:16:58.251 --rc genhtml_legend=1 00:16:58.251 --rc geninfo_all_blocks=1 00:16:58.251 --rc geninfo_unexecuted_blocks=1 00:16:58.251 00:16:58.251 ' 
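The scripts/common.sh activity traced above (lt 1.15 2, IFS=.-:, the ver1/ver2 arrays) is a field-by-field dotted-version comparison used to decide which lcov option spelling the installed lcov supports. A self-contained sketch of the same technique, simplified from what the trace shows:

    #!/usr/bin/env bash
    # Returns 0 (true) when $1 sorts strictly before $2, comparing dotted
    # version fields numerically and treating missing fields as 0.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # versions are equal, so not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2: use the old flag spelling"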
00:16:58.251 05:19:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:58.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.251 --rc genhtml_branch_coverage=1 00:16:58.251 --rc genhtml_function_coverage=1 00:16:58.251 --rc genhtml_legend=1 00:16:58.251 --rc geninfo_all_blocks=1 00:16:58.251 --rc geninfo_unexecuted_blocks=1 00:16:58.251 00:16:58.251 ' 00:16:58.251 05:19:14 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.251 05:19:14 -- nvmf/common.sh@7 -- # uname -s 00:16:58.251 05:19:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.251 05:19:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.251 05:19:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.251 05:19:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.251 05:19:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.251 05:19:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.251 05:19:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.251 05:19:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.251 05:19:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.251 05:19:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.251 05:19:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:58.251 05:19:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:58.251 05:19:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.251 05:19:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.251 05:19:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.251 05:19:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:58.251 05:19:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.251 05:19:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.251 05:19:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.251 05:19:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 05:19:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 05:19:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 05:19:14 -- paths/export.sh@5 -- # export PATH 00:16:58.252 05:19:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.252 05:19:14 -- nvmf/common.sh@46 -- # : 0 00:16:58.252 05:19:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:58.252 05:19:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:58.252 05:19:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:58.252 05:19:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.252 05:19:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.252 05:19:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:58.252 05:19:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:58.252 05:19:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:58.252 05:19:14 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:58.252 05:19:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:58.252 05:19:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.252 05:19:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:58.252 05:19:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:58.252 05:19:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:58.252 05:19:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.252 05:19:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.252 05:19:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.252 05:19:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:58.252 05:19:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:58.252 05:19:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:58.252 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:17:04.826 05:19:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:04.826 05:19:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:04.826 05:19:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:04.826 05:19:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:04.826 05:19:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:04.826 05:19:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:04.827 05:19:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:04.827 05:19:21 -- nvmf/common.sh@294 -- # net_devs=() 00:17:04.827 05:19:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:04.827 05:19:21 -- nvmf/common.sh@295 -- # e810=() 00:17:04.827 05:19:21 -- nvmf/common.sh@295 -- # local -ga e810 00:17:04.827 05:19:21 -- nvmf/common.sh@296 -- # x722=() 
00:17:04.827 05:19:21 -- nvmf/common.sh@296 -- # local -ga x722 00:17:04.827 05:19:21 -- nvmf/common.sh@297 -- # mlx=() 00:17:04.827 05:19:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:04.827 05:19:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.827 05:19:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:04.827 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:04.827 05:19:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.827 05:19:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:04.827 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:04.827 05:19:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.827 05:19:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.827 05:19:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.827 05:19:21 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:04.827 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.827 05:19:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.827 05:19:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:04.827 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.827 05:19:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:04.827 05:19:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:04.827 05:19:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:04.827 05:19:21 -- nvmf/common.sh@57 -- # uname 00:17:04.827 05:19:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:04.827 05:19:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:04.827 05:19:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:04.827 05:19:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:04.827 05:19:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:04.827 05:19:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:04.827 05:19:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:04.827 05:19:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:04.827 05:19:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:04.827 05:19:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:04.827 05:19:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:04.827 05:19:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.827 05:19:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:04.827 05:19:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:04.827 05:19:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.827 05:19:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@104 -- # continue 2 00:17:04.827 05:19:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@104 -- # continue 2 00:17:04.827 05:19:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:04.827 05:19:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:04.827 05:19:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:04.827 05:19:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:04.827 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:04.827 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:04.827 altname enp217s0f0np0 00:17:04.827 altname ens818f0np0 00:17:04.827 inet 192.168.100.8/24 scope global mlx_0_0 00:17:04.827 valid_lft forever preferred_lft forever 00:17:04.827 05:19:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:04.827 05:19:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:04.827 05:19:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:04.827 05:19:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:04.827 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:04.827 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:04.827 altname enp217s0f1np1 00:17:04.827 altname ens818f1np1 00:17:04.827 inet 192.168.100.9/24 scope global mlx_0_1 00:17:04.827 valid_lft forever preferred_lft forever 00:17:04.827 05:19:21 -- nvmf/common.sh@410 -- # return 0 00:17:04.827 05:19:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:04.827 05:19:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:04.827 05:19:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:04.827 05:19:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:04.827 05:19:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.827 05:19:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:04.827 05:19:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:04.827 05:19:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.827 05:19:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:04.827 05:19:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@104 -- # continue 2 00:17:04.827 05:19:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.827 05:19:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:04.827 05:19:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:04.827 05:19:21 -- nvmf/common.sh@104 -- # continue 2 00:17:04.827 05:19:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:04.827 05:19:21 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:04.827 05:19:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:04.828 05:19:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:04.828 05:19:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:04.828 05:19:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:04.828 05:19:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:04.828 05:19:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:04.828 05:19:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:04.828 05:19:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:04.828 05:19:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:04.828 192.168.100.9' 00:17:04.828 05:19:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:04.828 192.168.100.9' 00:17:04.828 05:19:21 -- nvmf/common.sh@445 -- # head -n 1 00:17:04.828 05:19:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:04.828 05:19:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:04.828 192.168.100.9' 00:17:04.828 05:19:21 -- nvmf/common.sh@446 -- # tail -n +2 00:17:04.828 05:19:21 -- nvmf/common.sh@446 -- # head -n 1 00:17:04.828 05:19:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:04.828 05:19:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:04.828 05:19:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:04.828 05:19:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:04.828 05:19:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:04.828 05:19:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:04.828 05:19:21 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:04.828 05:19:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.828 05:19:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.828 05:19:21 -- common/autotest_common.sh@10 -- # set +x 00:17:04.828 05:19:21 -- nvmf/common.sh@469 -- # nvmfpid=1789433 00:17:04.828 05:19:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.828 05:19:21 -- nvmf/common.sh@470 -- # waitforlisten 1789433 00:17:04.828 05:19:21 -- common/autotest_common.sh@829 -- # '[' -z 1789433 ']' 00:17:04.828 05:19:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.828 05:19:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.828 05:19:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.828 05:19:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.828 05:19:21 -- common/autotest_common.sh@10 -- # set +x 00:17:04.828 [2024-11-19 05:19:21.369703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
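The allocate_nic_ips walk above resolves each RDMA interface to its IPv4 address with a small pipeline: ip -o -4 prints one record per line, and field 4 carries the CIDR address. The helper pattern, extracted as a runnable sketch of what nvmf/common.sh@111-112 traces:

    # Resolve an interface name to its bare IPv4 address, as the trace does.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" emits one line per address; $4 looks like "192.168.100.8/24".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig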
00:17:04.828 [2024-11-19 05:19:21.369767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.087 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.087 [2024-11-19 05:19:21.441261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.087 [2024-11-19 05:19:21.480043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:05.087 [2024-11-19 05:19:21.480154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.087 [2024-11-19 05:19:21.480164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.087 [2024-11-19 05:19:21.480173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.087 [2024-11-19 05:19:21.480193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.656 05:19:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.656 05:19:22 -- common/autotest_common.sh@862 -- # return 0 00:17:05.656 05:19:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.656 05:19:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.656 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 05:19:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.916 05:19:22 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 [2024-11-19 05:19:22.252276] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x242a3a0/0x242e890) succeed. 00:17:05.916 [2024-11-19 05:19:22.261827] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x242b8a0/0x246ff30) succeed. 
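The rpc_cmd calls just above and in the trace lines that follow amount to the standard NVMe-oF over RDMA target bring-up: transport, subsystem, listener, backing bdev, namespace. The same sequence written against SPDK's rpc.py (the script path is illustrative; the flags are exactly those in the trace):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10               # any host, serial number, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420             # the mlx_0_0 address resolved earlier
    ./scripts/rpc.py bdev_null_create NULL1 1000 512 # 1000 MiB null bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1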
00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 [2024-11-19 05:19:22.321706] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 NULL1 00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:05.916 05:19:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.916 05:19:22 -- common/autotest_common.sh@10 -- # set +x 00:17:05.916 05:19:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.916 05:19:22 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:05.916 [2024-11-19 05:19:22.377076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:05.916 [2024-11-19 05:19:22.377111] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789650 ]
00:17:06.177 EAL: No free 2048 kB hugepages reported on node 1
00:17:06.177 Attached to nqn.2016-06.io.spdk:cnode1
00:17:06.177 Namespace ID: 1 size: 1GB
00:17:06.177 fused_ordering(0)
... (fused_ordering(1) through fused_ordering(740) follow in strict sequence, timestamps 00:17:06.177-00:17:06.441) ...
00:17:06.441 fused_ordering(741)
fused_ordering(742) 00:17:06.441 fused_ordering(743) 00:17:06.441 fused_ordering(744) 00:17:06.441 fused_ordering(745) 00:17:06.441 fused_ordering(746) 00:17:06.441 fused_ordering(747) 00:17:06.441 fused_ordering(748) 00:17:06.441 fused_ordering(749) 00:17:06.441 fused_ordering(750) 00:17:06.441 fused_ordering(751) 00:17:06.441 fused_ordering(752) 00:17:06.441 fused_ordering(753) 00:17:06.441 fused_ordering(754) 00:17:06.441 fused_ordering(755) 00:17:06.441 fused_ordering(756) 00:17:06.441 fused_ordering(757) 00:17:06.441 fused_ordering(758) 00:17:06.441 fused_ordering(759) 00:17:06.441 fused_ordering(760) 00:17:06.441 fused_ordering(761) 00:17:06.441 fused_ordering(762) 00:17:06.441 fused_ordering(763) 00:17:06.441 fused_ordering(764) 00:17:06.441 fused_ordering(765) 00:17:06.441 fused_ordering(766) 00:17:06.441 fused_ordering(767) 00:17:06.441 fused_ordering(768) 00:17:06.441 fused_ordering(769) 00:17:06.441 fused_ordering(770) 00:17:06.441 fused_ordering(771) 00:17:06.441 fused_ordering(772) 00:17:06.441 fused_ordering(773) 00:17:06.441 fused_ordering(774) 00:17:06.441 fused_ordering(775) 00:17:06.441 fused_ordering(776) 00:17:06.441 fused_ordering(777) 00:17:06.441 fused_ordering(778) 00:17:06.441 fused_ordering(779) 00:17:06.441 fused_ordering(780) 00:17:06.441 fused_ordering(781) 00:17:06.441 fused_ordering(782) 00:17:06.441 fused_ordering(783) 00:17:06.441 fused_ordering(784) 00:17:06.441 fused_ordering(785) 00:17:06.441 fused_ordering(786) 00:17:06.441 fused_ordering(787) 00:17:06.441 fused_ordering(788) 00:17:06.441 fused_ordering(789) 00:17:06.441 fused_ordering(790) 00:17:06.441 fused_ordering(791) 00:17:06.441 fused_ordering(792) 00:17:06.441 fused_ordering(793) 00:17:06.441 fused_ordering(794) 00:17:06.441 fused_ordering(795) 00:17:06.441 fused_ordering(796) 00:17:06.441 fused_ordering(797) 00:17:06.441 fused_ordering(798) 00:17:06.441 fused_ordering(799) 00:17:06.441 fused_ordering(800) 00:17:06.441 fused_ordering(801) 00:17:06.441 fused_ordering(802) 00:17:06.441 fused_ordering(803) 00:17:06.441 fused_ordering(804) 00:17:06.441 fused_ordering(805) 00:17:06.441 fused_ordering(806) 00:17:06.441 fused_ordering(807) 00:17:06.441 fused_ordering(808) 00:17:06.441 fused_ordering(809) 00:17:06.441 fused_ordering(810) 00:17:06.441 fused_ordering(811) 00:17:06.441 fused_ordering(812) 00:17:06.441 fused_ordering(813) 00:17:06.441 fused_ordering(814) 00:17:06.441 fused_ordering(815) 00:17:06.441 fused_ordering(816) 00:17:06.441 fused_ordering(817) 00:17:06.441 fused_ordering(818) 00:17:06.441 fused_ordering(819) 00:17:06.441 fused_ordering(820) 00:17:06.700 fused_ordering(821) 00:17:06.700 fused_ordering(822) 00:17:06.700 fused_ordering(823) 00:17:06.700 fused_ordering(824) 00:17:06.700 fused_ordering(825) 00:17:06.700 fused_ordering(826) 00:17:06.700 fused_ordering(827) 00:17:06.700 fused_ordering(828) 00:17:06.700 fused_ordering(829) 00:17:06.700 fused_ordering(830) 00:17:06.700 fused_ordering(831) 00:17:06.700 fused_ordering(832) 00:17:06.700 fused_ordering(833) 00:17:06.700 fused_ordering(834) 00:17:06.700 fused_ordering(835) 00:17:06.700 fused_ordering(836) 00:17:06.700 fused_ordering(837) 00:17:06.700 fused_ordering(838) 00:17:06.700 fused_ordering(839) 00:17:06.700 fused_ordering(840) 00:17:06.700 fused_ordering(841) 00:17:06.700 fused_ordering(842) 00:17:06.700 fused_ordering(843) 00:17:06.700 fused_ordering(844) 00:17:06.700 fused_ordering(845) 00:17:06.700 fused_ordering(846) 00:17:06.700 fused_ordering(847) 00:17:06.700 fused_ordering(848) 00:17:06.700 fused_ordering(849) 
00:17:06.700 fused_ordering(850) 00:17:06.700 fused_ordering(851) 00:17:06.700 fused_ordering(852) 00:17:06.700 fused_ordering(853) 00:17:06.700 fused_ordering(854) 00:17:06.700 fused_ordering(855) 00:17:06.700 fused_ordering(856) 00:17:06.700 fused_ordering(857) 00:17:06.700 fused_ordering(858) 00:17:06.700 fused_ordering(859) 00:17:06.700 fused_ordering(860) 00:17:06.700 fused_ordering(861) 00:17:06.700 fused_ordering(862) 00:17:06.700 fused_ordering(863) 00:17:06.700 fused_ordering(864) 00:17:06.700 fused_ordering(865) 00:17:06.700 fused_ordering(866) 00:17:06.700 fused_ordering(867) 00:17:06.700 fused_ordering(868) 00:17:06.700 fused_ordering(869) 00:17:06.700 fused_ordering(870) 00:17:06.700 fused_ordering(871) 00:17:06.700 fused_ordering(872) 00:17:06.700 fused_ordering(873) 00:17:06.700 fused_ordering(874) 00:17:06.700 fused_ordering(875) 00:17:06.700 fused_ordering(876) 00:17:06.700 fused_ordering(877) 00:17:06.700 fused_ordering(878) 00:17:06.700 fused_ordering(879) 00:17:06.700 fused_ordering(880) 00:17:06.700 fused_ordering(881) 00:17:06.700 fused_ordering(882) 00:17:06.700 fused_ordering(883) 00:17:06.700 fused_ordering(884) 00:17:06.700 fused_ordering(885) 00:17:06.700 fused_ordering(886) 00:17:06.700 fused_ordering(887) 00:17:06.700 fused_ordering(888) 00:17:06.700 fused_ordering(889) 00:17:06.700 fused_ordering(890) 00:17:06.700 fused_ordering(891) 00:17:06.700 fused_ordering(892) 00:17:06.700 fused_ordering(893) 00:17:06.700 fused_ordering(894) 00:17:06.700 fused_ordering(895) 00:17:06.700 fused_ordering(896) 00:17:06.700 fused_ordering(897) 00:17:06.700 fused_ordering(898) 00:17:06.700 fused_ordering(899) 00:17:06.700 fused_ordering(900) 00:17:06.700 fused_ordering(901) 00:17:06.700 fused_ordering(902) 00:17:06.700 fused_ordering(903) 00:17:06.700 fused_ordering(904) 00:17:06.700 fused_ordering(905) 00:17:06.700 fused_ordering(906) 00:17:06.700 fused_ordering(907) 00:17:06.700 fused_ordering(908) 00:17:06.700 fused_ordering(909) 00:17:06.700 fused_ordering(910) 00:17:06.700 fused_ordering(911) 00:17:06.700 fused_ordering(912) 00:17:06.700 fused_ordering(913) 00:17:06.700 fused_ordering(914) 00:17:06.700 fused_ordering(915) 00:17:06.700 fused_ordering(916) 00:17:06.700 fused_ordering(917) 00:17:06.700 fused_ordering(918) 00:17:06.700 fused_ordering(919) 00:17:06.700 fused_ordering(920) 00:17:06.700 fused_ordering(921) 00:17:06.700 fused_ordering(922) 00:17:06.700 fused_ordering(923) 00:17:06.700 fused_ordering(924) 00:17:06.700 fused_ordering(925) 00:17:06.700 fused_ordering(926) 00:17:06.700 fused_ordering(927) 00:17:06.700 fused_ordering(928) 00:17:06.700 fused_ordering(929) 00:17:06.700 fused_ordering(930) 00:17:06.700 fused_ordering(931) 00:17:06.700 fused_ordering(932) 00:17:06.700 fused_ordering(933) 00:17:06.700 fused_ordering(934) 00:17:06.700 fused_ordering(935) 00:17:06.700 fused_ordering(936) 00:17:06.700 fused_ordering(937) 00:17:06.700 fused_ordering(938) 00:17:06.700 fused_ordering(939) 00:17:06.700 fused_ordering(940) 00:17:06.700 fused_ordering(941) 00:17:06.700 fused_ordering(942) 00:17:06.700 fused_ordering(943) 00:17:06.700 fused_ordering(944) 00:17:06.700 fused_ordering(945) 00:17:06.700 fused_ordering(946) 00:17:06.700 fused_ordering(947) 00:17:06.700 fused_ordering(948) 00:17:06.700 fused_ordering(949) 00:17:06.700 fused_ordering(950) 00:17:06.700 fused_ordering(951) 00:17:06.700 fused_ordering(952) 00:17:06.700 fused_ordering(953) 00:17:06.700 fused_ordering(954) 00:17:06.700 fused_ordering(955) 00:17:06.700 fused_ordering(956) 00:17:06.700 
fused_ordering(957) 00:17:06.700 fused_ordering(958) 00:17:06.700 fused_ordering(959) 00:17:06.700 fused_ordering(960) 00:17:06.700 fused_ordering(961) 00:17:06.700 fused_ordering(962) 00:17:06.700 fused_ordering(963) 00:17:06.700 fused_ordering(964) 00:17:06.700 fused_ordering(965) 00:17:06.700 fused_ordering(966) 00:17:06.700 fused_ordering(967) 00:17:06.700 fused_ordering(968) 00:17:06.700 fused_ordering(969) 00:17:06.700 fused_ordering(970) 00:17:06.700 fused_ordering(971) 00:17:06.700 fused_ordering(972) 00:17:06.700 fused_ordering(973) 00:17:06.700 fused_ordering(974) 00:17:06.700 fused_ordering(975) 00:17:06.700 fused_ordering(976) 00:17:06.700 fused_ordering(977) 00:17:06.700 fused_ordering(978) 00:17:06.700 fused_ordering(979) 00:17:06.700 fused_ordering(980) 00:17:06.700 fused_ordering(981) 00:17:06.700 fused_ordering(982) 00:17:06.700 fused_ordering(983) 00:17:06.700 fused_ordering(984) 00:17:06.700 fused_ordering(985) 00:17:06.700 fused_ordering(986) 00:17:06.700 fused_ordering(987) 00:17:06.700 fused_ordering(988) 00:17:06.700 fused_ordering(989) 00:17:06.700 fused_ordering(990) 00:17:06.700 fused_ordering(991) 00:17:06.700 fused_ordering(992) 00:17:06.700 fused_ordering(993) 00:17:06.700 fused_ordering(994) 00:17:06.700 fused_ordering(995) 00:17:06.700 fused_ordering(996) 00:17:06.700 fused_ordering(997) 00:17:06.700 fused_ordering(998) 00:17:06.700 fused_ordering(999) 00:17:06.700 fused_ordering(1000) 00:17:06.700 fused_ordering(1001) 00:17:06.700 fused_ordering(1002) 00:17:06.700 fused_ordering(1003) 00:17:06.700 fused_ordering(1004) 00:17:06.700 fused_ordering(1005) 00:17:06.700 fused_ordering(1006) 00:17:06.700 fused_ordering(1007) 00:17:06.700 fused_ordering(1008) 00:17:06.700 fused_ordering(1009) 00:17:06.700 fused_ordering(1010) 00:17:06.700 fused_ordering(1011) 00:17:06.700 fused_ordering(1012) 00:17:06.700 fused_ordering(1013) 00:17:06.700 fused_ordering(1014) 00:17:06.700 fused_ordering(1015) 00:17:06.700 fused_ordering(1016) 00:17:06.700 fused_ordering(1017) 00:17:06.700 fused_ordering(1018) 00:17:06.700 fused_ordering(1019) 00:17:06.700 fused_ordering(1020) 00:17:06.700 fused_ordering(1021) 00:17:06.700 fused_ordering(1022) 00:17:06.700 fused_ordering(1023) 00:17:06.700 05:19:23 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:06.700 05:19:23 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:06.700 05:19:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.700 05:19:23 -- nvmf/common.sh@116 -- # sync 00:17:06.700 05:19:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:06.700 05:19:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:06.700 05:19:23 -- nvmf/common.sh@119 -- # set +e 00:17:06.700 05:19:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.700 05:19:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:06.701 rmmod nvme_rdma 00:17:06.701 rmmod nvme_fabrics 00:17:06.701 05:19:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.701 05:19:23 -- nvmf/common.sh@123 -- # set -e 00:17:06.701 05:19:23 -- nvmf/common.sh@124 -- # return 0 00:17:06.701 05:19:23 -- nvmf/common.sh@477 -- # '[' -n 1789433 ']' 00:17:06.701 05:19:23 -- nvmf/common.sh@478 -- # killprocess 1789433 00:17:06.701 05:19:23 -- common/autotest_common.sh@936 -- # '[' -z 1789433 ']' 00:17:06.701 05:19:23 -- common/autotest_common.sh@940 -- # kill -0 1789433 00:17:06.701 05:19:23 -- common/autotest_common.sh@941 -- # uname 00:17:06.701 05:19:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.701 05:19:23 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1789433 00:17:06.701 05:19:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:06.701 05:19:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:06.701 05:19:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1789433' 00:17:06.701 killing process with pid 1789433 00:17:06.701 05:19:23 -- common/autotest_common.sh@955 -- # kill 1789433 00:17:06.701 05:19:23 -- common/autotest_common.sh@960 -- # wait 1789433 00:17:06.959 05:19:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.959 05:19:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:06.959 00:17:06.959 real 0m8.951s 00:17:06.959 user 0m4.765s 00:17:06.959 sys 0m5.544s 00:17:06.959 05:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.959 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:17:06.959 ************************************ 00:17:06.959 END TEST nvmf_fused_ordering 00:17:06.959 ************************************ 00:17:06.959 05:19:23 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:06.959 05:19:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:06.959 05:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.959 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:17:06.959 ************************************ 00:17:06.959 START TEST nvmf_delete_subsystem 00:17:06.959 ************************************ 00:17:06.959 05:19:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:07.218 * Looking for test storage... 00:17:07.218 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:07.218 05:19:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:07.218 05:19:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:07.218 05:19:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:07.218 05:19:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:07.218 05:19:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:07.218 05:19:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:07.218 05:19:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:07.218 05:19:23 -- scripts/common.sh@335 -- # IFS=.-: 00:17:07.218 05:19:23 -- scripts/common.sh@335 -- # read -ra ver1 00:17:07.218 05:19:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.218 05:19:23 -- scripts/common.sh@336 -- # read -ra ver2 00:17:07.218 05:19:23 -- scripts/common.sh@337 -- # local 'op=<' 00:17:07.218 05:19:23 -- scripts/common.sh@339 -- # ver1_l=2 00:17:07.218 05:19:23 -- scripts/common.sh@340 -- # ver2_l=1 00:17:07.218 05:19:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:07.218 05:19:23 -- scripts/common.sh@343 -- # case "$op" in 00:17:07.218 05:19:23 -- scripts/common.sh@344 -- # : 1 00:17:07.218 05:19:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:07.218 05:19:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.218 05:19:23 -- scripts/common.sh@364 -- # decimal 1 00:17:07.218 05:19:23 -- scripts/common.sh@352 -- # local d=1 00:17:07.218 05:19:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.218 05:19:23 -- scripts/common.sh@354 -- # echo 1 00:17:07.218 05:19:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:07.218 05:19:23 -- scripts/common.sh@365 -- # decimal 2 00:17:07.218 05:19:23 -- scripts/common.sh@352 -- # local d=2 00:17:07.218 05:19:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.218 05:19:23 -- scripts/common.sh@354 -- # echo 2 00:17:07.218 05:19:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:07.218 05:19:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:07.218 05:19:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:07.218 05:19:23 -- scripts/common.sh@367 -- # return 0 00:17:07.218 05:19:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.218 05:19:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:07.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.218 --rc genhtml_branch_coverage=1 00:17:07.218 --rc genhtml_function_coverage=1 00:17:07.218 --rc genhtml_legend=1 00:17:07.218 --rc geninfo_all_blocks=1 00:17:07.218 --rc geninfo_unexecuted_blocks=1 00:17:07.218 00:17:07.218 ' 00:17:07.218 05:19:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:07.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.218 --rc genhtml_branch_coverage=1 00:17:07.218 --rc genhtml_function_coverage=1 00:17:07.218 --rc genhtml_legend=1 00:17:07.218 --rc geninfo_all_blocks=1 00:17:07.218 --rc geninfo_unexecuted_blocks=1 00:17:07.218 00:17:07.218 ' 00:17:07.218 05:19:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:07.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.218 --rc genhtml_branch_coverage=1 00:17:07.218 --rc genhtml_function_coverage=1 00:17:07.218 --rc genhtml_legend=1 00:17:07.218 --rc geninfo_all_blocks=1 00:17:07.218 --rc geninfo_unexecuted_blocks=1 00:17:07.218 00:17:07.218 ' 00:17:07.218 05:19:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:07.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.218 --rc genhtml_branch_coverage=1 00:17:07.218 --rc genhtml_function_coverage=1 00:17:07.218 --rc genhtml_legend=1 00:17:07.218 --rc geninfo_all_blocks=1 00:17:07.218 --rc geninfo_unexecuted_blocks=1 00:17:07.218 00:17:07.218 ' 00:17:07.218 05:19:23 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.218 05:19:23 -- nvmf/common.sh@7 -- # uname -s 00:17:07.218 05:19:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.218 05:19:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.218 05:19:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.218 05:19:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.218 05:19:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.218 05:19:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.218 05:19:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.218 05:19:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.218 05:19:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.218 05:19:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.218 05:19:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:07.218 05:19:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:07.218 05:19:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.218 05:19:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.218 05:19:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.218 05:19:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:07.218 05:19:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.218 05:19:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.218 05:19:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.218 05:19:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.218 05:19:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.219 05:19:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.219 05:19:23 -- paths/export.sh@5 -- # export PATH 00:17:07.219 05:19:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.219 05:19:23 -- nvmf/common.sh@46 -- # : 0 00:17:07.219 05:19:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.219 05:19:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.219 05:19:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.219 05:19:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.219 05:19:23 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.219 05:19:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.219 05:19:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.219 05:19:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.219 05:19:23 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:07.219 05:19:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:07.219 05:19:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.219 05:19:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.219 05:19:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.219 05:19:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.219 05:19:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.219 05:19:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.219 05:19:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.219 05:19:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:07.219 05:19:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:07.219 05:19:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:07.219 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:17:15.344 05:19:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.344 05:19:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:15.344 05:19:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:15.344 05:19:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:15.344 05:19:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:15.344 05:19:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:15.344 05:19:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:15.344 05:19:30 -- nvmf/common.sh@294 -- # net_devs=() 00:17:15.344 05:19:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:15.344 05:19:30 -- nvmf/common.sh@295 -- # e810=() 00:17:15.344 05:19:30 -- nvmf/common.sh@295 -- # local -ga e810 00:17:15.344 05:19:30 -- nvmf/common.sh@296 -- # x722=() 00:17:15.344 05:19:30 -- nvmf/common.sh@296 -- # local -ga x722 00:17:15.344 05:19:30 -- nvmf/common.sh@297 -- # mlx=() 00:17:15.344 05:19:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:15.344 05:19:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.344 05:19:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:15.344 05:19:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:15.344 05:19:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:15.344 05:19:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
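The enumeration that follows is nvmf/common.sh keeping only PCI functions whose vendor:device IDs appear in its e810/x722/mlx allowlists (the mlx entries above are all Mellanox, vendor 0x15b3) and then resolving each surviving PCI address to a kernel net device through sysfs. A minimal standalone sketch of that resolution step, using only the sysfs layout the trace itself relies on (/sys/bus/pci/devices/<addr>/net/<ifname>); the loop and variable names are illustrative, not the script's own:

  #!/usr/bin/env bash
  # print net devices backed by Mellanox (vendor 0x15b3) PCI functions,
  # mirroring the "Found net devices under ..." records in the trace below
  for pci in /sys/bus/pci/devices/*; do
      [[ "$(cat "$pci/vendor" 2>/dev/null)" == "0x15b3" ]] || continue
      for netdir in "$pci"/net/*; do
          [[ -e "$netdir" ]] && echo "Found net devices under ${pci##*/}: ${netdir##*/}"
      done
  done

On this host the two surviving functions are 0000:d9:00.0 and 0000:d9:00.1 (device ID 0x1015), exposed as mlx_0_0 and mlx_0_1 in the records that follow.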
00:17:15.344 05:19:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:15.344 05:19:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:15.344 05:19:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.344 05:19:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:15.344 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:15.344 05:19:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:15.344 05:19:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.344 05:19:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:15.344 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:15.344 05:19:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:15.344 05:19:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:15.344 05:19:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:15.344 05:19:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:15.344 05:19:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.344 05:19:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.344 05:19:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.344 05:19:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:15.344 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:15.344 05:19:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.344 05:19:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:15.344 05:19:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.344 05:19:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.344 05:19:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.344 05:19:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:15.344 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:15.344 05:19:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.345 05:19:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:15.345 05:19:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:15.345 05:19:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:15.345 05:19:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:15.345 05:19:30 -- nvmf/common.sh@57 -- # uname 00:17:15.345 05:19:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:15.345 05:19:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:15.345 05:19:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:15.345 05:19:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:15.345 
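The modprobe sequence continues below; once the IB/RDMA stack is loaded, allocate_nic_ips assigns 192.168.100.8 and 192.168.100.9 to the two interfaces and reads each back with the ip/awk/cut pipeline visible in the following records. Wrapped for reuse (the function wrapper is added here for illustration; the pipeline itself is verbatim from the trace):

  # first IPv4 address on an interface, as nvmf/common.sh's get_ip_address reads it
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
  get_ip_address mlx_0_1   # -> 192.168.100.9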
05:19:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:15.345 05:19:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:15.345 05:19:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:15.345 05:19:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:15.345 05:19:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:15.345 05:19:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:15.345 05:19:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:15.345 05:19:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:15.345 05:19:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:15.345 05:19:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:15.345 05:19:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:15.345 05:19:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:15.345 05:19:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@104 -- # continue 2 00:17:15.345 05:19:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@104 -- # continue 2 00:17:15.345 05:19:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:15.345 05:19:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.345 05:19:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:15.345 05:19:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:15.345 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:15.345 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:15.345 altname enp217s0f0np0 00:17:15.345 altname ens818f0np0 00:17:15.345 inet 192.168.100.8/24 scope global mlx_0_0 00:17:15.345 valid_lft forever preferred_lft forever 00:17:15.345 05:19:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:15.345 05:19:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.345 05:19:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:15.345 05:19:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:15.345 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:15.345 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:15.345 altname enp217s0f1np1 
00:17:15.345 altname ens818f1np1 00:17:15.345 inet 192.168.100.9/24 scope global mlx_0_1 00:17:15.345 valid_lft forever preferred_lft forever 00:17:15.345 05:19:30 -- nvmf/common.sh@410 -- # return 0 00:17:15.345 05:19:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:15.345 05:19:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:15.345 05:19:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:15.345 05:19:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:15.345 05:19:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:15.345 05:19:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:15.345 05:19:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:15.345 05:19:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:15.345 05:19:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:15.345 05:19:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@104 -- # continue 2 00:17:15.345 05:19:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.345 05:19:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:15.345 05:19:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@104 -- # continue 2 00:17:15.345 05:19:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:15.345 05:19:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:15.345 05:19:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:15.345 05:19:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.345 05:19:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.345 05:19:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:15.345 192.168.100.9' 00:17:15.345 05:19:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:15.345 192.168.100.9' 00:17:15.345 05:19:30 -- nvmf/common.sh@445 -- # head -n 1 00:17:15.345 05:19:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:15.345 05:19:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:15.345 192.168.100.9' 00:17:15.345 05:19:30 -- nvmf/common.sh@446 -- # tail -n +2 00:17:15.345 05:19:30 -- nvmf/common.sh@446 -- # head -n 1 00:17:15.345 05:19:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:15.345 05:19:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:15.345 05:19:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:15.345 05:19:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:15.345 05:19:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:15.345 05:19:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:15.345 05:19:30 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:15.345 05:19:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.345 05:19:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.345 05:19:30 -- common/autotest_common.sh@10 -- # set +x 00:17:15.345 05:19:30 -- nvmf/common.sh@469 -- # nvmfpid=1793168 00:17:15.345 05:19:30 -- nvmf/common.sh@470 -- # waitforlisten 1793168 00:17:15.345 05:19:30 -- common/autotest_common.sh@829 -- # '[' -z 1793168 ']' 00:17:15.345 05:19:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.345 05:19:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.345 05:19:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.345 05:19:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.345 05:19:30 -- common/autotest_common.sh@10 -- # set +x 00:17:15.345 05:19:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:15.345 [2024-11-19 05:19:30.696858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:15.345 [2024-11-19 05:19:30.696907] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.345 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.345 [2024-11-19 05:19:30.766063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.345 [2024-11-19 05:19:30.803294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.345 [2024-11-19 05:19:30.803420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.345 [2024-11-19 05:19:30.803430] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.345 [2024-11-19 05:19:30.803440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
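The target application (nvmf_tgt, pid 1793168) is now up, and the rpc_cmd calls that follow configure it over the JSON-RPC socket. They are equivalent to driving SPDK's stock scripts/rpc.py client by hand; the sketch below replays the same sequence with the arguments the trace shows (rpc.py defaults to the /var/tmp/spdk.sock socket the app just opened):

  # RDMA transport with the shared-buffer count and in-capsule data size the test uses
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # subsystem allowing any host (-a), with a serial number and up to 10 namespaces
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on the first RDMA IP discovered above
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # a null bdev wrapped in a delay bdev, so queued I/O stays in flight long
  # enough for the subsystem delete to race against it
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf is then pointed at the listener (trtype:rdma traddr:192.168.100.8 trsvcid:4420) for 5 seconds of 70%-read random I/O at queue depth 128 with 512-byte blocks, and the subsystem is deleted while that load is still running.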
00:17:15.345 [2024-11-19 05:19:30.803486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.345 [2024-11-19 05:19:30.803489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.345 05:19:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.345 05:19:31 -- common/autotest_common.sh@862 -- # return 0 00:17:15.345 05:19:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.345 05:19:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.345 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.345 05:19:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.345 05:19:31 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:15.345 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.345 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 [2024-11-19 05:19:31.575060] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b21bb0/0x1b260a0) succeed. 00:17:15.346 [2024-11-19 05:19:31.583979] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b230b0/0x1b67740) succeed. 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.346 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.346 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:15.346 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.346 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 [2024-11-19 05:19:31.665052] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.346 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.346 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 NULL1 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:15.346 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.346 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 Delay0 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.346 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.346 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.346 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@28 -- # perf_pid=1793379 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:15.346 05:19:31 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:15.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.346 [2024-11-19 05:19:31.771878] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:17.340 05:19:33 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.340 05:19:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.340 05:19:33 -- common/autotest_common.sh@10 -- # set +x 00:17:18.278 NVMe io qpair process completion error 00:17:18.278 NVMe io qpair process completion error 00:17:18.538 NVMe io qpair process completion error 00:17:18.538 NVMe io qpair process completion error 00:17:18.538 NVMe io qpair process completion error 00:17:18.538 NVMe io qpair process completion error 00:17:18.538 05:19:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.538 05:19:34 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:18.538 05:19:34 -- target/delete_subsystem.sh@35 -- # kill -0 1793379 00:17:18.538 05:19:34 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:19.107 05:19:35 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:19.107 05:19:35 -- target/delete_subsystem.sh@35 -- # kill -0 1793379 00:17:19.107 05:19:35 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:19.367 Read completed with error (sct=0, sc=8) 00:17:19.367 starting I/O failed: -6 00:17:19.367 Read completed with error (sct=0, sc=8) 00:17:19.367 starting I/O failed: -6 00:17:19.367 Read completed with error (sct=0, sc=8) 00:17:19.367 starting I/O failed: -6 00:17:19.367 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Read completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 00:17:19.368 Write 
completed with error (sct=0, sc=8) 00:17:19.368 starting I/O failed: -6 … [several hundred further interleaved 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submission failures between 00:17:19.368 and 00:17:19.369, elided]
completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Write completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 Read completed with error (sct=0, sc=8) 00:17:19.369 05:19:35 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:19.369 05:19:35 -- target/delete_subsystem.sh@35 -- # kill -0 1793379 00:17:19.369 05:19:35 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:19.369 [2024-11-19 05:19:35.870769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:19.369 [2024-11-19 05:19:35.870808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.369 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:19.369 Initializing NVMe Controllers 00:17:19.369 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.369 Controller IO queue size 128, less than required. 00:17:19.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:19.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:19.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:19.369 Initialization complete. Launching workers. 
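The burst above is the expected fallout of delete_subsystem.sh: the subsystem is deleted while spdk_nvme_perf still has I/O in flight, every queued request completes in error, and the harness then polls for the orphaned perf process to exit. A minimal sketch of that launch-and-watch pattern, reconstructed from the xtrace records (loop shape and variable names are illustrative; the perf flags are the ones used in this run):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                                  # remember the I/O generator's PID
    # ... the target subsystem is deleted here via rpc_cmd nvmf_delete_subsystem ...
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # signal 0 is a liveness probe only
        (( delay++ > 30 )) && break              # give up after ~15 s of 0.5 s ticks
        sleep 0.5
    done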
00:17:19.369 Initializing NVMe Controllers 00:17:19.369 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.369 Controller IO queue size 128, less than required. 00:17:19.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:19.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:19.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:19.369 Initialization complete. Launching workers.
00:17:19.369 ========================================================
00:17:19.369 Latency(us)
00:17:19.369 Device Information : IOPS MiB/s Average min max
00:17:19.369 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 72.42 0.04 1768949.66 1000183.84 2978474.63
00:17:19.369 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 88.52 0.04 1454009.89 1000115.00 2979187.04
00:17:19.369 ========================================================
00:17:19.369 Total : 160.94 0.08 1595732.79 1000115.00 2979187.04
00:17:19.369
00:17:19.939 05:19:36 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@35 -- # kill -0 1793379 00:17:19.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1793379) - No such process 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@45 -- # NOT wait 1793379 00:17:19.939 05:19:36 -- common/autotest_common.sh@650 -- # local es=0 00:17:19.939 05:19:36 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1793379 00:17:19.939 05:19:36 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:19.939 05:19:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.939 05:19:36 -- common/autotest_common.sh@642 -- # type -t wait 00:17:19.939 05:19:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.939 05:19:36 -- common/autotest_common.sh@653 -- # wait 1793379 00:17:19.939 05:19:36 -- common/autotest_common.sh@653 -- # es=1 00:17:19.939 05:19:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.939 05:19:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.939 05:19:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.939 05:19:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.939 05:19:36 -- common/autotest_common.sh@10 -- # set +x 00:17:19.939 05:19:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:19.939 05:19:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.939 05:19:36 -- common/autotest_common.sh@10 -- # set +x 00:17:19.939 [2024-11-19 05:19:36.388234] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:19.939 05:19:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.939 05:19:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.939 05:19:36 -- common/autotest_common.sh@10 -- # set +x 00:17:19.939 05:19:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@54 -- # perf_pid=1794197 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@57 -- # kill -0 1794197 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:19.939 05:19:36 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:17:19.939 EAL: No free 2048 kB hugepages reported on node 1
00:17:19.939 [2024-11-19 05:19:36.475673] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[00:17:20.509-00:17:26.973: the watchdog records "(( delay++ > 20 ))", "kill -0 1794197", "sleep 0.5" repeat every half second from 05:19:36 through 05:19:43 while the second spdk_nvme_perf run executes]
00:17:27.232 Initializing NVMe Controllers 00:17:27.232 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.232 Controller IO queue size 128, less than required. 00:17:27.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:27.232 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:27.232 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:27.232 Initialization complete. Launching workers.
00:17:27.232 ========================================================
00:17:27.232 Latency(us)
00:17:27.232 Device Information : IOPS MiB/s Average min max
00:17:27.232 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001181.62 1000050.25 1003989.53
00:17:27.233 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002398.83 1000068.22 1005734.74
00:17:27.233 ========================================================
00:17:27.233 Total : 256.00 0.12 1001790.23 1000050.25 1005734.74
00:17:27.233
00:17:27.492 05:19:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:27.492 05:19:43 -- target/delete_subsystem.sh@57 -- # kill -0 1794197 00:17:27.492 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1794197) - No such process 00:17:27.492 05:19:43 -- target/delete_subsystem.sh@67 -- # wait 1794197 00:17:27.492 05:19:43 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:27.492 05:19:43 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:27.492 05:19:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:27.493 05:19:43 -- nvmf/common.sh@116 -- # sync 00:17:27.493 05:19:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:27.493 05:19:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:27.493 05:19:43 -- nvmf/common.sh@119 -- # set +e 00:17:27.493 05:19:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:27.493 05:19:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:27.493 rmmod nvme_rdma 00:17:27.493 rmmod nvme_fabrics 00:17:27.493 05:19:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:27.493 05:19:44 -- nvmf/common.sh@123 -- # set -e 00:17:27.493 05:19:44 -- nvmf/common.sh@124 -- # return 0 00:17:27.493 05:19:44 -- nvmf/common.sh@477 -- # '[' -n 1793168 ']' 00:17:27.493 05:19:44 -- nvmf/common.sh@478 -- # killprocess 1793168 00:17:27.493 05:19:44 -- common/autotest_common.sh@936 -- # '[' -z 1793168 ']' 00:17:27.493 05:19:44 -- common/autotest_common.sh@940 -- # kill -0 1793168 00:17:27.493 05:19:44 -- common/autotest_common.sh@941 -- # uname 00:17:27.493 05:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.493 05:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1793168 00:17:27.752 05:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:27.752 05:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:27.752 05:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1793168' 00:17:27.752 killing process with pid 1793168 00:17:27.752 05:19:44 -- common/autotest_common.sh@955 -- # kill 1793168 00:17:27.752 05:19:44 -- common/autotest_common.sh@960 -- # wait 1793168 00:17:27.752 05:19:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:27.752 05:19:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:27.752 00:17:27.752 real 0m20.873s
00:17:27.752 user 0m50.299s 00:17:27.752 sys 0m6.523s 00:17:27.752 05:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:27.752 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:27.752 ************************************ 00:17:27.752 END TEST nvmf_delete_subsystem 00:17:27.752 ************************************ 00:17:28.012 05:19:44 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:28.012 05:19:44 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:28.012 05:19:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.012 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:28.012 ************************************ 00:17:28.012 START TEST nvmf_nvme_cli 00:17:28.012 ************************************ 00:17:28.012 05:19:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:28.012 * Looking for test storage... 00:17:28.012 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:28.012 05:19:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:28.012 05:19:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:28.012 05:19:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:28.012 05:19:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:28.012 05:19:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:28.012 05:19:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:28.012 05:19:44 -- scripts/common.sh@335 -- # IFS=.-: 00:17:28.012 05:19:44 -- scripts/common.sh@335 -- # read -ra ver1 00:17:28.012 05:19:44 -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.012 05:19:44 -- scripts/common.sh@336 -- # read -ra ver2 00:17:28.012 05:19:44 -- scripts/common.sh@337 -- # local 'op=<' 00:17:28.012 05:19:44 -- scripts/common.sh@339 -- # ver1_l=2 00:17:28.012 05:19:44 -- scripts/common.sh@340 -- # ver2_l=1 00:17:28.012 05:19:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:28.012 05:19:44 -- scripts/common.sh@343 -- # case "$op" in 00:17:28.012 05:19:44 -- scripts/common.sh@344 -- # : 1 00:17:28.012 05:19:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:28.012 05:19:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.012 05:19:44 -- scripts/common.sh@364 -- # decimal 1 00:17:28.012 05:19:44 -- scripts/common.sh@352 -- # local d=1 00:17:28.012 05:19:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.012 05:19:44 -- scripts/common.sh@354 -- # echo 1 00:17:28.012 05:19:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:28.012 05:19:44 -- scripts/common.sh@365 -- # decimal 2 00:17:28.012 05:19:44 -- scripts/common.sh@352 -- # local d=2 00:17:28.012 05:19:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.012 05:19:44 -- scripts/common.sh@354 -- # echo 2 00:17:28.012 05:19:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:28.012 05:19:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:28.012 05:19:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:28.012 05:19:44 -- scripts/common.sh@367 -- # return 0 00:17:28.012 05:19:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:28.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.012 --rc genhtml_branch_coverage=1 00:17:28.012 --rc genhtml_function_coverage=1 00:17:28.012 --rc genhtml_legend=1 00:17:28.012 --rc geninfo_all_blocks=1 00:17:28.012 --rc geninfo_unexecuted_blocks=1 00:17:28.012 00:17:28.012 ' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:28.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.012 --rc genhtml_branch_coverage=1 00:17:28.012 --rc genhtml_function_coverage=1 00:17:28.012 --rc genhtml_legend=1 00:17:28.012 --rc geninfo_all_blocks=1 00:17:28.012 --rc geninfo_unexecuted_blocks=1 00:17:28.012 00:17:28.012 ' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:28.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.012 --rc genhtml_branch_coverage=1 00:17:28.012 --rc genhtml_function_coverage=1 00:17:28.012 --rc genhtml_legend=1 00:17:28.012 --rc geninfo_all_blocks=1 00:17:28.012 --rc geninfo_unexecuted_blocks=1 00:17:28.012 00:17:28.012 ' 00:17:28.012 05:19:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:28.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.012 --rc genhtml_branch_coverage=1 00:17:28.012 --rc genhtml_function_coverage=1 00:17:28.012 --rc genhtml_legend=1 00:17:28.012 --rc geninfo_all_blocks=1 00:17:28.012 --rc geninfo_unexecuted_blocks=1 00:17:28.012 00:17:28.012 ' 00:17:28.012 05:19:44 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.012 05:19:44 -- nvmf/common.sh@7 -- # uname -s 00:17:28.012 05:19:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.012 05:19:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.012 05:19:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.012 05:19:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.012 05:19:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.012 05:19:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.012 05:19:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.012 05:19:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.012 05:19:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.012 05:19:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.012 05:19:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:28.012 05:19:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:28.012 05:19:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.012 05:19:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.012 05:19:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.012 05:19:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:28.012 05:19:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.012 05:19:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.012 05:19:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.012 05:19:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.012 05:19:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.012 05:19:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.012 05:19:44 -- paths/export.sh@5 -- # export PATH 00:17:28.013 05:19:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.013 05:19:44 -- nvmf/common.sh@46 -- # : 0 00:17:28.013 05:19:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.013 05:19:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.013 05:19:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.013 05:19:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.013 05:19:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.013 05:19:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:28.013 05:19:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.013 05:19:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.013 05:19:44 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.013 05:19:44 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.013 05:19:44 -- target/nvme_cli.sh@14 -- # devs=() 00:17:28.013 05:19:44 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:28.013 05:19:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:28.013 05:19:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.013 05:19:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:28.013 05:19:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:28.013 05:19:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:28.013 05:19:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.013 05:19:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.013 05:19:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.013 05:19:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:28.013 05:19:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:28.013 05:19:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:28.013 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:34.584 05:19:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:34.584 05:19:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:34.584 05:19:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:34.584 05:19:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:34.584 05:19:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:34.584 05:19:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:34.584 05:19:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:34.584 05:19:50 -- nvmf/common.sh@294 -- # net_devs=() 00:17:34.584 05:19:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:34.584 05:19:50 -- nvmf/common.sh@295 -- # e810=() 00:17:34.584 05:19:50 -- nvmf/common.sh@295 -- # local -ga e810 00:17:34.584 05:19:50 -- nvmf/common.sh@296 -- # x722=() 00:17:34.584 05:19:50 -- nvmf/common.sh@296 -- # local -ga x722 00:17:34.584 05:19:50 -- nvmf/common.sh@297 -- # mlx=() 00:17:34.584 05:19:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:34.584 05:19:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.584 05:19:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:34.584 05:19:50 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:34.584 05:19:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.584 05:19:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:34.584 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:34.584 05:19:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:34.584 05:19:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.584 05:19:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:34.584 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:34.584 05:19:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:34.584 05:19:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:34.584 05:19:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.584 05:19:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.584 05:19:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.584 05:19:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.584 05:19:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:34.584 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:34.584 05:19:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.584 05:19:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.584 05:19:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.584 05:19:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.584 05:19:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:34.584 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:34.584 05:19:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.584 05:19:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:34.584 05:19:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:34.584 05:19:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:34.584 05:19:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:34.584 05:19:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:34.584 05:19:50 -- nvmf/common.sh@57 -- # uname 00:17:34.584 05:19:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:34.584 
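The Found 0000:d9:00.0/0000:d9:00.1 records above come from the helper that maps each supported RDMA-capable PCI function to its Linux netdev through sysfs. A condensed sketch of that mapping (the PCI addresses are the ones from this run; the loop itself is illustrative):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every netdev bound to a PCI function appears as a directory under .../net/
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
        pci_net_devs=( "${pci_net_devs[@]##*/}" )    # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done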
05:19:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:34.584 05:19:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:34.584 05:19:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:34.584 05:19:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:34.584 05:19:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:34.584 05:19:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:34.584 05:19:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:34.584 05:19:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:34.584 05:19:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:34.584 05:19:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:34.584 05:19:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:34.584 05:19:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:34.584 05:19:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:34.584 05:19:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:34.584 05:19:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:34.584 05:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.584 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.584 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:34.584 05:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:34.584 05:19:51 -- nvmf/common.sh@104 -- # continue 2 00:17:34.584 05:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@104 -- # continue 2 00:17:34.585 05:19:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:34.585 05:19:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.585 05:19:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:34.585 05:19:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:34.585 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:34.585 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:34.585 altname enp217s0f0np0 00:17:34.585 altname ens818f0np0 00:17:34.585 inet 192.168.100.8/24 scope global mlx_0_0 00:17:34.585 valid_lft forever preferred_lft forever 00:17:34.585 05:19:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:34.585 05:19:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.585 05:19:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:34.585 05:19:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:34.585 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:34.585 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:34.585 altname enp217s0f1np1 00:17:34.585 altname ens818f1np1 00:17:34.585 inet 192.168.100.9/24 scope global mlx_0_1 00:17:34.585 valid_lft forever preferred_lft forever 00:17:34.585 05:19:51 -- nvmf/common.sh@410 -- # return 0 00:17:34.585 05:19:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:34.585 05:19:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:34.585 05:19:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:34.585 05:19:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:34.585 05:19:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:34.585 05:19:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:34.585 05:19:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:34.585 05:19:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:34.585 05:19:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:34.585 05:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@104 -- # continue 2 00:17:34.585 05:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.585 05:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:34.585 05:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@104 -- # continue 2 00:17:34.585 05:19:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:34.585 05:19:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.585 05:19:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:34.585 05:19:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.585 05:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.585 05:19:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:34.585 192.168.100.9' 00:17:34.585 05:19:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:34.585 192.168.100.9' 00:17:34.585 05:19:51 -- nvmf/common.sh@445 -- # head -n 1 00:17:34.585 05:19:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:34.585 05:19:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:34.585 192.168.100.9' 00:17:34.585 05:19:51 -- nvmf/common.sh@446 -- # tail -n +2 00:17:34.585 05:19:51 -- nvmf/common.sh@446 -- # head -n 1 00:17:34.585 05:19:51 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:34.585 05:19:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:34.585 05:19:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:34.585 05:19:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:34.585 05:19:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:34.585 05:19:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:34.845 05:19:51 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:34.845 05:19:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.845 05:19:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.845 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:17:34.845 05:19:51 -- nvmf/common.sh@469 -- # nvmfpid=1798814 00:17:34.845 05:19:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:34.845 05:19:51 -- nvmf/common.sh@470 -- # waitforlisten 1798814 00:17:34.845 05:19:51 -- common/autotest_common.sh@829 -- # '[' -z 1798814 ']' 00:17:34.845 05:19:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.845 05:19:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.845 05:19:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.845 05:19:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.845 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:17:34.845 [2024-11-19 05:19:51.213597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:34.845 [2024-11-19 05:19:51.213655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.845 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.845 [2024-11-19 05:19:51.284023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.845 [2024-11-19 05:19:51.324328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.845 [2024-11-19 05:19:51.324437] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.845 [2024-11-19 05:19:51.324447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.845 [2024-11-19 05:19:51.324455] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
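In the records above, allocate_nic_ips resolves 192.168.100.8 and 192.168.100.9 from the two mlx interfaces, and head/tail split that list into the first and second target IPs. The extraction idiom, written out as a sketch of what the trace shows:

    get_ip_address() {
        local interface=$1
        # `ip -o` prints one record per line; field 4 is the CIDR address,
        # and cut drops the /24 prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here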
00:17:34.845 [2024-11-19 05:19:51.324497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.845 [2024-11-19 05:19:51.324631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.845 [2024-11-19 05:19:51.324653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.845 [2024-11-19 05:19:51.324655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.785 05:19:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.785 05:19:52 -- common/autotest_common.sh@862 -- # return 0 00:17:35.785 05:19:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.785 05:19:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 05:19:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.785 05:19:52 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 [2024-11-19 05:19:52.118720] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e0d200/0x1e116f0) succeed. 00:17:35.785 [2024-11-19 05:19:52.127856] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e0e7f0/0x1e52d90) succeed. 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 Malloc0 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 Malloc1 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 [2024-11-19 
05:19:52.323390] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:35.785 05:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.785 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 05:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.785 05:19:52 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:36.044 00:17:36.044 Discovery Log Number of Records 2, Generation counter 2 00:17:36.044 =====Discovery Log Entry 0====== 00:17:36.044 trtype: rdma 00:17:36.044 adrfam: ipv4 00:17:36.044 subtype: current discovery subsystem 00:17:36.044 treq: not required 00:17:36.044 portid: 0 00:17:36.044 trsvcid: 4420 00:17:36.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:36.044 traddr: 192.168.100.8 00:17:36.044 eflags: explicit discovery connections, duplicate discovery information 00:17:36.044 rdma_prtype: not specified 00:17:36.044 rdma_qptype: connected 00:17:36.044 rdma_cms: rdma-cm 00:17:36.044 rdma_pkey: 0x0000 00:17:36.044 =====Discovery Log Entry 1====== 00:17:36.044 trtype: rdma 00:17:36.044 adrfam: ipv4 00:17:36.044 subtype: nvme subsystem 00:17:36.044 treq: not required 00:17:36.044 portid: 0 00:17:36.044 trsvcid: 4420 00:17:36.044 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:36.044 traddr: 192.168.100.8 00:17:36.044 eflags: none 00:17:36.044 rdma_prtype: not specified 00:17:36.044 rdma_qptype: connected 00:17:36.044 rdma_cms: rdma-cm 00:17:36.044 rdma_pkey: 0x0000 00:17:36.044 05:19:52 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:36.044 05:19:52 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:36.044 05:19:52 -- nvmf/common.sh@510 -- # local dev _ 00:17:36.044 05:19:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:36.044 05:19:52 -- nvmf/common.sh@509 -- # nvme list 00:17:36.044 05:19:52 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:36.044 05:19:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:36.044 05:19:52 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:36.044 05:19:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:36.044 05:19:52 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:36.044 05:19:52 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:36.983 05:19:53 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:36.983 05:19:53 -- common/autotest_common.sh@1187 -- # local i=0 00:17:36.983 05:19:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.983 05:19:53 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:36.983 05:19:53 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:36.983 05:19:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:38.890 05:19:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:38.890 05:19:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:38.890 05:19:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 
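The discovery log above (one discovery entry plus one NVM subsystem entry) is what the target advertises; the host then connects using the host NQN generated earlier in the run. The two nvme-cli invocations as they appear in this log, with the extra -i 15 that the harness configures for mlx5 NICs:

    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 \
        -a 192.168.100.8 -s 4420
    # after a successful connect, the two namespaces surface via `nvme list`
    # as /dev/nvme0n1 and /dev/nvme0n2, which waitforserial then counts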
00:17:38.890 05:19:55 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:38.890 05:19:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.890 05:19:55 -- common/autotest_common.sh@1197 -- # return 0 00:17:38.890 05:19:55 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:38.890 05:19:55 -- nvmf/common.sh@510 -- # local dev _ 00:17:38.890 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:38.890 05:19:55 -- nvmf/common.sh@509 -- # nvme list 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:39.150 /dev/nvme0n2 ]] 00:17:39.150 05:19:55 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:39.150 05:19:55 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:39.150 05:19:55 -- nvmf/common.sh@510 -- # local dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@509 -- # nvme list 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:39.150 05:19:55 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:39.150 05:19:55 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.150 05:19:55 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:39.150 05:19:55 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.089 05:19:56 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.089 05:19:56 -- common/autotest_common.sh@1208 -- # local i=0 00:17:40.089 05:19:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:40.089 05:19:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.089 05:19:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:40.089 05:19:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.089 05:19:56 -- common/autotest_common.sh@1220 -- # return 0 00:17:40.089 05:19:56 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:40.089 05:19:56 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.089 05:19:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.089 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.089 05:19:56 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.089 05:19:56 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:40.089 05:19:56 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:40.089 05:19:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:40.089 05:19:56 -- nvmf/common.sh@116 -- # sync 00:17:40.089 05:19:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:40.089 05:19:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:40.089 05:19:56 -- nvmf/common.sh@119 -- # set +e 00:17:40.089 05:19:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:40.089 05:19:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:40.089 rmmod nvme_rdma 00:17:40.089 rmmod nvme_fabrics 00:17:40.089 05:19:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:40.089 05:19:56 -- nvmf/common.sh@123 -- # set -e 00:17:40.089 05:19:56 -- nvmf/common.sh@124 -- # return 0 00:17:40.089 05:19:56 -- nvmf/common.sh@477 -- # '[' -n 1798814 ']' 00:17:40.089 05:19:56 -- nvmf/common.sh@478 -- # killprocess 1798814 00:17:40.089 05:19:56 -- common/autotest_common.sh@936 -- # '[' -z 1798814 ']' 00:17:40.089 05:19:56 -- common/autotest_common.sh@940 -- # kill -0 1798814 00:17:40.089 05:19:56 -- common/autotest_common.sh@941 -- # uname 00:17:40.089 05:19:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.089 05:19:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1798814 00:17:40.089 05:19:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:40.089 05:19:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:40.089 05:19:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1798814' 00:17:40.089 killing process with pid 1798814 00:17:40.089 05:19:56 -- common/autotest_common.sh@955 -- # kill 1798814 00:17:40.089 05:19:56 -- common/autotest_common.sh@960 -- # wait 1798814 00:17:40.349 05:19:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:40.349 05:19:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:40.608 00:17:40.608 real 0m12.559s 00:17:40.608 user 0m24.159s 00:17:40.608 sys 0m5.674s 00:17:40.608 05:19:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:40.608 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.608 ************************************ 00:17:40.608 END TEST nvmf_nvme_cli 00:17:40.608 ************************************ 00:17:40.608 05:19:56 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:40.608 05:19:56 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:40.608 05:19:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:40.608 05:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.608 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.608 ************************************ 00:17:40.608 START TEST nvmf_host_management 00:17:40.608 ************************************ 00:17:40.608 05:19:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:40.608 * Looking for test storage... 
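The teardown above runs in reverse order of setup: the initiator disconnects from cnode1, the subsystem is deleted over RPC, sync flushes outstanding I/O, the host-side fabric modules are unloaded (modprobe -r nvme-rdma drags nvme-fabrics out with it, hence the rmmod lines), and finally the nvmf_tgt process is killed by pid. A condensed sketch of that sequence, assuming NVMFPID holds the target pid of a process started from this same shell and rpc.py is SPDK's scripts/rpc.py:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-rdma      # prints the rmmod nvme_rdma/nvme_fabrics lines seen above
    modprobe -v -r nvme-fabrics   # usually a no-op by now; the harness retries under set +e
    kill "$NVMFPID" && wait "$NVMFPID"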
00:17:40.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:40.608 05:19:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:40.608 05:19:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:40.608 05:19:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:40.608 05:19:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:40.608 05:19:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:40.608 05:19:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:40.608 05:19:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:40.608 05:19:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:40.608 05:19:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:40.608 05:19:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.608 05:19:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:40.609 05:19:57 -- scripts/common.sh@337 -- # local 'op=<' 00:17:40.609 05:19:57 -- scripts/common.sh@339 -- # ver1_l=2 00:17:40.609 05:19:57 -- scripts/common.sh@340 -- # ver2_l=1 00:17:40.609 05:19:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:40.609 05:19:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:40.609 05:19:57 -- scripts/common.sh@344 -- # : 1 00:17:40.609 05:19:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:40.609 05:19:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.609 05:19:57 -- scripts/common.sh@364 -- # decimal 1 00:17:40.609 05:19:57 -- scripts/common.sh@352 -- # local d=1 00:17:40.609 05:19:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.609 05:19:57 -- scripts/common.sh@354 -- # echo 1 00:17:40.609 05:19:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:40.609 05:19:57 -- scripts/common.sh@365 -- # decimal 2 00:17:40.609 05:19:57 -- scripts/common.sh@352 -- # local d=2 00:17:40.609 05:19:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.609 05:19:57 -- scripts/common.sh@354 -- # echo 2 00:17:40.609 05:19:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:40.609 05:19:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.609 05:19:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:40.609 05:19:57 -- scripts/common.sh@367 -- # return 0 00:17:40.609 05:19:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.609 05:19:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.609 --rc genhtml_branch_coverage=1 00:17:40.609 --rc genhtml_function_coverage=1 00:17:40.609 --rc genhtml_legend=1 00:17:40.609 --rc geninfo_all_blocks=1 00:17:40.609 --rc geninfo_unexecuted_blocks=1 00:17:40.609 00:17:40.609 ' 00:17:40.609 05:19:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.609 --rc genhtml_branch_coverage=1 00:17:40.609 --rc genhtml_function_coverage=1 00:17:40.609 --rc genhtml_legend=1 00:17:40.609 --rc geninfo_all_blocks=1 00:17:40.609 --rc geninfo_unexecuted_blocks=1 00:17:40.609 00:17:40.609 ' 00:17:40.609 05:19:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.609 --rc genhtml_branch_coverage=1 00:17:40.609 --rc genhtml_function_coverage=1 00:17:40.609 --rc genhtml_legend=1 00:17:40.609 --rc geninfo_all_blocks=1 00:17:40.609 --rc geninfo_unexecuted_blocks=1 00:17:40.609 00:17:40.609 ' 
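The block above is the harness probing the installed lcov version: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, so lt 1.15 2 succeeds (1 < 2 in the first field) and the pre-2.x option spelling is selected. A condensed sketch of that comparison, assuming purely numeric fields (the real script additionally validates each field with decimal):

    version_lt() {   # usage: version_lt 1.15 2  ->  returns 0 if $1 < $2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2.x"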
00:17:40.609 05:19:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.609 --rc genhtml_branch_coverage=1 00:17:40.609 --rc genhtml_function_coverage=1 00:17:40.609 --rc genhtml_legend=1 00:17:40.609 --rc geninfo_all_blocks=1 00:17:40.609 --rc geninfo_unexecuted_blocks=1 00:17:40.609 00:17:40.609 ' 00:17:40.609 05:19:57 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.609 05:19:57 -- nvmf/common.sh@7 -- # uname -s 00:17:40.609 05:19:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.609 05:19:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.609 05:19:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.609 05:19:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.609 05:19:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.609 05:19:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.609 05:19:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.609 05:19:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.609 05:19:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.609 05:19:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.609 05:19:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:40.609 05:19:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:40.609 05:19:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.609 05:19:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.869 05:19:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.869 05:19:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:40.869 05:19:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.869 05:19:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.869 05:19:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.870 05:19:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.870 05:19:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.870 05:19:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.870 05:19:57 -- paths/export.sh@5 -- # export PATH 00:17:40.870 05:19:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.870 05:19:57 -- nvmf/common.sh@46 -- # : 0 00:17:40.870 05:19:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.870 05:19:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.870 05:19:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.870 05:19:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.870 05:19:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.870 05:19:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.870 05:19:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.870 05:19:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.870 05:19:57 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.870 05:19:57 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.870 05:19:57 -- target/host_management.sh@104 -- # nvmftestinit 00:17:40.870 05:19:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:40.870 05:19:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.870 05:19:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.870 05:19:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.870 05:19:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.870 05:19:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.870 05:19:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.870 05:19:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.870 05:19:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:40.870 05:19:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:40.870 05:19:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:40.870 05:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:47.450 05:20:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:47.450 05:20:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:47.450 05:20:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:47.450 05:20:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:47.450 05:20:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:47.450 05:20:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:47.450 05:20:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:47.450 05:20:03 -- nvmf/common.sh@294 -- # net_devs=() 00:17:47.450 05:20:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:47.450 
05:20:03 -- nvmf/common.sh@295 -- # e810=() 00:17:47.450 05:20:03 -- nvmf/common.sh@295 -- # local -ga e810 00:17:47.450 05:20:03 -- nvmf/common.sh@296 -- # x722=() 00:17:47.450 05:20:03 -- nvmf/common.sh@296 -- # local -ga x722 00:17:47.450 05:20:03 -- nvmf/common.sh@297 -- # mlx=() 00:17:47.450 05:20:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:47.450 05:20:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.450 05:20:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:47.450 05:20:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:47.450 05:20:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:47.450 05:20:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:47.450 05:20:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:47.450 05:20:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:47.450 05:20:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:47.450 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:47.450 05:20:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:47.450 05:20:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:47.450 05:20:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:47.450 05:20:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:47.450 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:47.451 05:20:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:47.451 05:20:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:47.451 05:20:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:47.451 05:20:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.451 05:20:03 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:47.451 05:20:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.451 05:20:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:47.451 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:47.451 05:20:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.451 05:20:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:47.451 05:20:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.451 05:20:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:47.451 05:20:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.451 05:20:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:47.451 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:47.451 05:20:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.451 05:20:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:47.451 05:20:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:47.451 05:20:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:47.451 05:20:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:47.451 05:20:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:47.451 05:20:03 -- nvmf/common.sh@57 -- # uname 00:17:47.451 05:20:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:47.451 05:20:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:47.451 05:20:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:47.451 05:20:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:47.711 05:20:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:47.711 05:20:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:47.711 05:20:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:47.711 05:20:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:47.711 05:20:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:47.711 05:20:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:47.711 05:20:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:47.711 05:20:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:47.711 05:20:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:47.711 05:20:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:47.711 05:20:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:47.711 05:20:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:47.711 05:20:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:47.711 05:20:04 -- nvmf/common.sh@104 -- # continue 2 00:17:47.711 05:20:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:47.711 05:20:04 -- nvmf/common.sh@104 -- # continue 2 00:17:47.711 05:20:04 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:47.711 05:20:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:47.711 05:20:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:47.711 05:20:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:47.711 05:20:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:47.711 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:47.711 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:47.711 altname enp217s0f0np0 00:17:47.711 altname ens818f0np0 00:17:47.711 inet 192.168.100.8/24 scope global mlx_0_0 00:17:47.711 valid_lft forever preferred_lft forever 00:17:47.711 05:20:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:47.711 05:20:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:47.711 05:20:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:47.711 05:20:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:47.711 05:20:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:47.711 05:20:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:47.711 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:47.711 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:47.711 altname enp217s0f1np1 00:17:47.711 altname ens818f1np1 00:17:47.711 inet 192.168.100.9/24 scope global mlx_0_1 00:17:47.711 valid_lft forever preferred_lft forever 00:17:47.711 05:20:04 -- nvmf/common.sh@410 -- # return 0 00:17:47.711 05:20:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:47.711 05:20:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:47.711 05:20:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:47.711 05:20:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:47.711 05:20:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:47.711 05:20:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:47.711 05:20:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:47.711 05:20:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:47.711 05:20:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:47.711 05:20:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.711 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:47.711 05:20:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:47.712 05:20:04 -- nvmf/common.sh@104 -- # continue 2 00:17:47.712 05:20:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:47.712 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.712 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:47.712 05:20:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.712 05:20:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:47.712 05:20:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 
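allocate_nic_ips above resolves each RDMA netdev to its IPv4 address with a short pipeline (column 4 of ip -o -4 addr show, prefix length cut off). The equivalent one-liners for this rig's two mlx5 ports:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9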
00:17:47.712 05:20:04 -- nvmf/common.sh@104 -- # continue 2 00:17:47.712 05:20:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:47.712 05:20:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:47.712 05:20:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:47.712 05:20:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:47.712 05:20:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:47.712 05:20:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:47.712 05:20:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:47.712 05:20:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:47.712 192.168.100.9' 00:17:47.712 05:20:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:47.712 192.168.100.9' 00:17:47.712 05:20:04 -- nvmf/common.sh@445 -- # head -n 1 00:17:47.712 05:20:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:47.712 05:20:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:47.712 192.168.100.9' 00:17:47.712 05:20:04 -- nvmf/common.sh@446 -- # tail -n +2 00:17:47.712 05:20:04 -- nvmf/common.sh@446 -- # head -n 1 00:17:47.712 05:20:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:47.712 05:20:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:47.712 05:20:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:47.712 05:20:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:47.712 05:20:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:47.712 05:20:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:47.712 05:20:04 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:47.712 05:20:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:47.712 05:20:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.712 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:17:47.712 ************************************ 00:17:47.712 START TEST nvmf_host_management 00:17:47.712 ************************************ 00:17:47.712 05:20:04 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:47.712 05:20:04 -- target/host_management.sh@69 -- # starttarget 00:17:47.712 05:20:04 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:47.712 05:20:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:47.712 05:20:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.712 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:17:47.712 05:20:04 -- nvmf/common.sh@469 -- # nvmfpid=1803169 00:17:47.712 05:20:04 -- nvmf/common.sh@470 -- # waitforlisten 1803169 00:17:47.712 05:20:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:47.712 05:20:04 -- common/autotest_common.sh@829 -- # '[' -z 1803169 ']' 00:17:47.712 05:20:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.712 05:20:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.712 05:20:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:47.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.712 05:20:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.712 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 [2024-11-19 05:20:04.302390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:47.972 [2024-11-19 05:20:04.302443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.972 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.972 [2024-11-19 05:20:04.374674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.972 [2024-11-19 05:20:04.411420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:47.972 [2024-11-19 05:20:04.411560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.972 [2024-11-19 05:20:04.411571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.972 [2024-11-19 05:20:04.411581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.972 [2024-11-19 05:20:04.411685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.972 [2024-11-19 05:20:04.411750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.972 [2024-11-19 05:20:04.411835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.972 [2024-11-19 05:20:04.411836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:48.909 05:20:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.909 05:20:05 -- common/autotest_common.sh@862 -- # return 0 00:17:48.909 05:20:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.909 05:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.909 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 05:20:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.909 05:20:05 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:48.909 05:20:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.909 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 [2024-11-19 05:20:05.185322] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12a04f0/0x12a49e0) succeed. 00:17:48.909 [2024-11-19 05:20:05.194509] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12a1ae0/0x12e6080) succeed. 
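nvmfappstart above launches the target with tracepoints enabled (-e 0xFFFF) and a four-core reactor mask (-m 0x1E covers cores 1 through 4, matching the four reactor startup notices), then waitforlisten blocks until the RPC socket answers. A sketch of the same bring-up from the spdk checkout, with the wait loop written out; polling rpc_get_methods is simply one cheap way to probe the socket, not necessarily how waitforlisten itself does it:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    NVMFPID=$!
    # Block until the target's RPC server is up on the default socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done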
00:17:48.909 05:20:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.909 05:20:05 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:48.909 05:20:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.909 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 05:20:05 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:48.909 05:20:05 -- target/host_management.sh@23 -- # cat 00:17:48.909 05:20:05 -- target/host_management.sh@30 -- # rpc_cmd 00:17:48.909 05:20:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.909 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 Malloc0 00:17:48.909 [2024-11-19 05:20:05.371911] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:48.910 05:20:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.910 05:20:05 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:48.910 05:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.910 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.910 05:20:05 -- target/host_management.sh@73 -- # perfpid=1803433 00:17:48.910 05:20:05 -- target/host_management.sh@74 -- # waitforlisten 1803433 /var/tmp/bdevperf.sock 00:17:48.910 05:20:05 -- common/autotest_common.sh@829 -- # '[' -z 1803433 ']' 00:17:48.910 05:20:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.910 05:20:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.910 05:20:05 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:48.910 05:20:05 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:48.910 05:20:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.910 05:20:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.910 05:20:05 -- nvmf/common.sh@520 -- # config=() 00:17:48.910 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:48.910 05:20:05 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.910 05:20:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.910 05:20:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.910 { 00:17:48.910 "params": { 00:17:48.910 "name": "Nvme$subsystem", 00:17:48.910 "trtype": "$TEST_TRANSPORT", 00:17:48.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.910 "adrfam": "ipv4", 00:17:48.910 "trsvcid": "$NVMF_PORT", 00:17:48.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.910 "hdgst": ${hdgst:-false}, 00:17:48.910 "ddgst": ${ddgst:-false} 00:17:48.910 }, 00:17:48.910 "method": "bdev_nvme_attach_controller" 00:17:48.910 } 00:17:48.910 EOF 00:17:48.910 )") 00:17:48.910 05:20:05 -- nvmf/common.sh@542 -- # cat 00:17:48.910 05:20:05 -- nvmf/common.sh@544 -- # jq . 
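Above, the test writes an RPC batch to rpcs.txt and replays it against the freshly started target: the RDMA transport is created with the shared-buffer options from nvmf/common.sh, a 64 MiB by 512 B malloc bdev (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE) backs the namespace, and the subsystem ends up listening on 192.168.100.8:4420. The batch itself is cat'ed but never echoed into the log, so the following is a hypothetical expansion reconstructed from the objects that do appear (Malloc0, cnode0, host0, the listener), not the literal file contents:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420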
00:17:48.910 05:20:05 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.910 05:20:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.910 "params": { 00:17:48.910 "name": "Nvme0", 00:17:48.910 "trtype": "rdma", 00:17:48.910 "traddr": "192.168.100.8", 00:17:48.910 "adrfam": "ipv4", 00:17:48.910 "trsvcid": "4420", 00:17:48.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:48.910 "hdgst": false, 00:17:48.910 "ddgst": false 00:17:48.910 }, 00:17:48.910 "method": "bdev_nvme_attach_controller" 00:17:48.910 }' 00:17:49.169 [2024-11-19 05:20:05.474054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:49.169 [2024-11-19 05:20:05.474107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803433 ] 00:17:49.169 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.169 [2024-11-19 05:20:05.547092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.169 [2024-11-19 05:20:05.583640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.429 Running I/O for 10 seconds... 00:17:49.998 05:20:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.998 05:20:06 -- common/autotest_common.sh@862 -- # return 0 00:17:49.998 05:20:06 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:49.998 05:20:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.998 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.998 05:20:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.998 05:20:06 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.998 05:20:06 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:49.998 05:20:06 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:49.998 05:20:06 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:49.998 05:20:06 -- target/host_management.sh@52 -- # local ret=1 00:17:49.998 05:20:06 -- target/host_management.sh@53 -- # local i 00:17:49.998 05:20:06 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:49.998 05:20:06 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:49.998 05:20:06 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:49.998 05:20:06 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:49.998 05:20:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.998 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.998 05:20:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.998 05:20:06 -- target/host_management.sh@55 -- # read_io_count=3211 00:17:49.998 05:20:06 -- target/host_management.sh@58 -- # '[' 3211 -ge 100 ']' 00:17:49.998 05:20:06 -- target/host_management.sh@59 -- # ret=0 00:17:49.998 05:20:06 -- target/host_management.sh@60 -- # break 00:17:49.998 05:20:06 -- target/host_management.sh@64 -- # return 0 00:17:49.998 05:20:06 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:49.998 05:20:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.998 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.998 05:20:06 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.998 05:20:06 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:49.998 05:20:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.998 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.998 05:20:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.998 05:20:06 -- target/host_management.sh@87 -- # sleep 1 00:17:50.938 [2024-11-19 05:20:07.375896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.375933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.375952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.375963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.375974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.375983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.375995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.376004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:17:50.938 [2024-11-19 05:20:07.376023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.376063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.376083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:50.938 [2024-11-19 05:20:07.376107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.376204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 
dnr:0 00:17:50.938 [2024-11-19 05:20:07.376273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:17:50.938 [2024-11-19 05:20:07.376343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:17:50.938 [2024-11-19 05:20:07.376363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:50.938 [2024-11-19 05:20:07.376441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 
05:20:07.376452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:50.938 [2024-11-19 05:20:07.376460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:50.938 [2024-11-19 05:20:07.376480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.938 [2024-11-19 05:20:07.376490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:50.938 [2024-11-19 05:20:07.376499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:17:50.939 [2024-11-19 05:20:07.376518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:50.939 [2024-11-19 05:20:07.376543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:50.939 [2024-11-19 05:20:07.376564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:50.939 [2024-11-19 05:20:07.376583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:17:50.939 [2024-11-19 05:20:07.376606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:50.939 [2024-11-19 05:20:07.376625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 00:17:50.939 [2024-11-19 05:20:07.376636] nvme_qpair.c: 
00:17:50.939 [... ~30 repeated *NOTICE* record pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE submissions (sqid:1 nsid:1 len:128, SGL KEYED DATA BLOCK, lba 43392-53376, non-contiguous), each completed by nvme_qpair.c: 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e4986000 sqhd:5310 p:0 m:0 dnr:0 ...]
00:17:50.939 [2024-11-19 05:20:07.379078] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller.
00:17:50.939 [2024-11-19 05:20:07.379997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:50.940 task offset: 48256 on job bdev=Nvme0n1 fails
00:17:50.940
00:17:50.940 Latency(us)
00:17:50.940 [2024-11-19T04:20:07.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.940 [2024-11-19T04:20:07.498Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:50.940 [2024-11-19T04:20:07.498Z] Job: Nvme0n1 ended in about 1.62 seconds with error
00:17:50.940 Verification LBA range: start 0x0 length 0x400
00:17:50.940 Nvme0n1 : 1.62 2111.49 131.97 39.44 0.00 29568.37 3171.94 1013343.85
00:17:50.940 [2024-11-19T04:20:07.498Z] ===================================================================================================================
00:17:50.940 [2024-11-19T04:20:07.498Z] Total : 2111.49 131.97 39.44 0.00 29568.37 3171.94 1013343.85
00:17:50.940 [2024-11-19 05:20:07.381597] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:50.940 05:20:07 -- target/host_management.sh@91 -- # kill -9 1803433
00:17:50.940 05:20:07 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:50.940 05:20:07 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:17:50.940 05:20:07 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:17:50.940 05:20:07 -- nvmf/common.sh@520 -- # config=()
00:17:50.940 05:20:07 -- nvmf/common.sh@520 -- # local subsystem config
00:17:50.940 05:20:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:17:50.940 05:20:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:17:50.940 {
00:17:50.940 "params": {
00:17:50.940 "name": "Nvme$subsystem",
00:17:50.940 "trtype": "$TEST_TRANSPORT",
00:17:50.940 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:50.940 "adrfam": "ipv4",
00:17:50.940 "trsvcid": "$NVMF_PORT",
00:17:50.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:50.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:50.940 "hdgst": ${hdgst:-false},
00:17:50.940 "ddgst": ${ddgst:-false}
00:17:50.940 },
00:17:50.940 "method": "bdev_nvme_attach_controller"
00:17:50.940 }
00:17:50.940 EOF
00:17:50.940 )")
00:17:50.940 05:20:07 -- nvmf/common.sh@542 -- # cat
00:17:50.940 05:20:07 -- nvmf/common.sh@544 -- # jq .
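[Note: the xtrace above shows nvmf/common.sh's gen_nvmf_target_json assembling one JSON stanza per subsystem with a heredoc, joining the stanzas, and piping the result through jq before bdevperf reads it over /dev/fd/62 (the process-substitution fd). A minimal sketch of that pattern follows, with this run's transport values substituted for the environment variables and simplified to the single-subsystem case actually exercised here (gen_nvmf_target_json 0); the assembled config it prints appears in the trace just below.]

    # Sketch of the config-assembly pattern, not a verbatim copy of nvmf/common.sh.
    gen_nvmf_target_json() {
        local subsystem
        local -a config=()
        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per subsystem; $subsystem expands in the heredoc.
            config+=("$(
    cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        # Join stanzas with commas and validate/pretty-print; trivial for one subsystem.
        local IFS=,
        printf '%s\n' "${config[*]}" | jq .
    }
    # bdevperf consumes the generated JSON via process substitution, as traced above:
    # ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1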
00:17:50.940 05:20:07 -- nvmf/common.sh@545 -- # IFS=, 00:17:50.940 05:20:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:50.940 "params": { 00:17:50.940 "name": "Nvme0", 00:17:50.940 "trtype": "rdma", 00:17:50.940 "traddr": "192.168.100.8", 00:17:50.940 "adrfam": "ipv4", 00:17:50.940 "trsvcid": "4420", 00:17:50.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:50.940 "hdgst": false, 00:17:50.940 "ddgst": false 00:17:50.940 }, 00:17:50.940 "method": "bdev_nvme_attach_controller" 00:17:50.940 }' 00:17:50.940 [2024-11-19 05:20:07.433847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:50.940 [2024-11-19 05:20:07.433893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803757 ] 00:17:50.940 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.200 [2024-11-19 05:20:07.504528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.200 [2024-11-19 05:20:07.541144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.200 Running I/O for 1 seconds... 00:17:52.579 00:17:52.579 Latency(us) 00:17:52.579 [2024-11-19T04:20:09.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.579 [2024-11-19T04:20:09.137Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:52.579 Verification LBA range: start 0x0 length 0x400 00:17:52.579 Nvme0n1 : 1.01 5603.52 350.22 0.00 0.00 11249.01 507.90 24746.39 00:17:52.579 [2024-11-19T04:20:09.137Z] =================================================================================================================== 00:17:52.579 [2024-11-19T04:20:09.137Z] Total : 5603.52 350.22 0.00 0.00 11249.01 507.90 24746.39 00:17:52.579 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1803433 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:52.579 05:20:08 -- target/host_management.sh@101 -- # stoptarget 00:17:52.579 05:20:08 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:52.579 05:20:08 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:52.579 05:20:08 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:52.579 05:20:08 -- target/host_management.sh@40 -- # nvmftestfini 00:17:52.579 05:20:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.579 05:20:08 -- nvmf/common.sh@116 -- # sync 00:17:52.579 05:20:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:52.579 05:20:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:52.579 05:20:08 -- nvmf/common.sh@119 -- # set +e 00:17:52.579 05:20:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.579 05:20:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:52.579 rmmod nvme_rdma 00:17:52.579 rmmod nvme_fabrics 00:17:52.579 05:20:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.579 05:20:08 -- nvmf/common.sh@123 -- # set -e 00:17:52.579 05:20:08 -- nvmf/common.sh@124 -- # return 0 00:17:52.579 05:20:08 -- nvmf/common.sh@477 -- # '[' -n 1803169 ']' 00:17:52.579 05:20:08 -- nvmf/common.sh@478 -- # killprocess 1803169 00:17:52.579 
05:20:08 -- common/autotest_common.sh@936 -- # '[' -z 1803169 ']' 00:17:52.579 05:20:08 -- common/autotest_common.sh@940 -- # kill -0 1803169 00:17:52.579 05:20:08 -- common/autotest_common.sh@941 -- # uname 00:17:52.579 05:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.579 05:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1803169 00:17:52.579 05:20:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:52.579 05:20:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:52.579 05:20:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1803169' 00:17:52.579 killing process with pid 1803169 00:17:52.579 05:20:09 -- common/autotest_common.sh@955 -- # kill 1803169 00:17:52.579 05:20:09 -- common/autotest_common.sh@960 -- # wait 1803169 00:17:52.839 [2024-11-19 05:20:09.293224] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:52.839 05:20:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.839 05:20:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:52.839 00:17:52.839 real 0m5.068s 00:17:52.839 user 0m22.860s 00:17:52.839 sys 0m0.982s 00:17:52.839 05:20:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.839 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:52.839 ************************************ 00:17:52.839 END TEST nvmf_host_management 00:17:52.839 ************************************ 00:17:52.839 05:20:09 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:52.839 00:17:52.839 real 0m12.398s 00:17:52.839 user 0m25.018s 00:17:52.839 sys 0m6.400s 00:17:52.839 05:20:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.839 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:52.839 ************************************ 00:17:52.839 END TEST nvmf_host_management 00:17:52.839 ************************************ 00:17:53.099 05:20:09 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:53.099 05:20:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.099 05:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.099 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.099 ************************************ 00:17:53.099 START TEST nvmf_lvol 00:17:53.099 ************************************ 00:17:53.099 05:20:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:53.099 * Looking for test storage... 
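[Note: the killprocess trace above (uname check, ps --no-headers -o comm=, the reactor_1-vs-sudo guard, then kill and wait) boils down to a guard-then-kill helper. A hedged sketch of the same flow, with error handling simplified relative to the real autotest_common.sh:]

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # process already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Refuse to kill a bare sudo wrapper in this simplified sketch.
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap so the shell does not report it later
    }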
00:17:53.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:53.099 05:20:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:53.099 05:20:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:53.099 05:20:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.099 05:20:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.099 05:20:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.099 05:20:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.099 05:20:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.099 05:20:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.099 05:20:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.099 05:20:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.099 05:20:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.099 05:20:09 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.099 05:20:09 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.099 05:20:09 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.099 05:20:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.099 05:20:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.099 05:20:09 -- scripts/common.sh@344 -- # : 1 00:17:53.099 05:20:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.099 05:20:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.099 05:20:09 -- scripts/common.sh@364 -- # decimal 1 00:17:53.099 05:20:09 -- scripts/common.sh@352 -- # local d=1 00:17:53.099 05:20:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.099 05:20:09 -- scripts/common.sh@354 -- # echo 1 00:17:53.099 05:20:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.099 05:20:09 -- scripts/common.sh@365 -- # decimal 2 00:17:53.099 05:20:09 -- scripts/common.sh@352 -- # local d=2 00:17:53.099 05:20:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.099 05:20:09 -- scripts/common.sh@354 -- # echo 2 00:17:53.099 05:20:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.099 05:20:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.099 05:20:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.099 05:20:09 -- scripts/common.sh@367 -- # return 0 00:17:53.099 05:20:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.099 05:20:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.099 --rc genhtml_branch_coverage=1 00:17:53.099 --rc genhtml_function_coverage=1 00:17:53.099 --rc genhtml_legend=1 00:17:53.099 --rc geninfo_all_blocks=1 00:17:53.099 --rc geninfo_unexecuted_blocks=1 00:17:53.099 00:17:53.099 ' 00:17:53.099 05:20:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.099 --rc genhtml_branch_coverage=1 00:17:53.099 --rc genhtml_function_coverage=1 00:17:53.099 --rc genhtml_legend=1 00:17:53.099 --rc geninfo_all_blocks=1 00:17:53.099 --rc geninfo_unexecuted_blocks=1 00:17:53.099 00:17:53.099 ' 00:17:53.099 05:20:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.099 --rc genhtml_branch_coverage=1 00:17:53.099 --rc genhtml_function_coverage=1 00:17:53.099 --rc genhtml_legend=1 00:17:53.099 --rc geninfo_all_blocks=1 00:17:53.099 --rc geninfo_unexecuted_blocks=1 00:17:53.099 00:17:53.099 ' 
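[Note: the scripts/common.sh trace above is a generic dotted-version comparison, here deciding that lcov 1.15 predates 2 before choosing coverage flags. A compact sketch of the algorithm (split on ".", "-", ":"; compare numerically field by field, padding the shorter version with zeros), assuming purely numeric components as in this run:]

    cmp_versions() {
        local ver1_str=$1 op=$2 ver2_str=$3
        local IFS=.-:                # split versions on dot, dash, and colon
        local -a ver1 ver2
        read -ra ver1 <<<"$ver1_str"
        read -ra ver2 <<<"$ver2_str"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components count as 0, so 1.15 compares like 1.15.0.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2 && echo "lcov is older than 2"   # true in this run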
00:17:53.099 05:20:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.099 --rc genhtml_branch_coverage=1 00:17:53.099 --rc genhtml_function_coverage=1 00:17:53.099 --rc genhtml_legend=1 00:17:53.099 --rc geninfo_all_blocks=1 00:17:53.099 --rc geninfo_unexecuted_blocks=1 00:17:53.099 00:17:53.099 ' 00:17:53.099 05:20:09 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.099 05:20:09 -- nvmf/common.sh@7 -- # uname -s 00:17:53.100 05:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.100 05:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.100 05:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.100 05:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.100 05:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.100 05:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.100 05:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.100 05:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.100 05:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.100 05:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.100 05:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:53.100 05:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:53.100 05:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.100 05:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.100 05:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.100 05:20:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:53.100 05:20:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.100 05:20:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.100 05:20:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.100 05:20:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.100 05:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.100 05:20:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.100 05:20:09 -- paths/export.sh@5 -- # export PATH 00:17:53.100 05:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.100 05:20:09 -- nvmf/common.sh@46 -- # : 0 00:17:53.100 05:20:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.100 05:20:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.100 05:20:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.100 05:20:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.100 05:20:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.100 05:20:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.100 05:20:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.100 05:20:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:53.100 05:20:09 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:53.100 05:20:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:53.100 05:20:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.100 05:20:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.100 05:20:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.100 05:20:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.100 05:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.100 05:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.100 05:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.100 05:20:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:53.100 05:20:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:53.100 05:20:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:53.100 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 05:20:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:59.779 05:20:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:59.779 05:20:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:59.779 05:20:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:59.779 05:20:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:59.779 05:20:15 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:59.779 05:20:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:59.779 05:20:15 -- nvmf/common.sh@294 -- # net_devs=() 00:17:59.779 05:20:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:59.779 05:20:15 -- nvmf/common.sh@295 -- # e810=() 00:17:59.779 05:20:15 -- nvmf/common.sh@295 -- # local -ga e810 00:17:59.779 05:20:15 -- nvmf/common.sh@296 -- # x722=() 00:17:59.779 05:20:15 -- nvmf/common.sh@296 -- # local -ga x722 00:17:59.779 05:20:15 -- nvmf/common.sh@297 -- # mlx=() 00:17:59.779 05:20:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:59.779 05:20:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.779 05:20:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:59.779 05:20:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.779 05:20:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:59.779 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:59.779 05:20:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.779 05:20:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.779 05:20:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:59.779 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:59.779 05:20:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.779 05:20:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:59.779 05:20:15 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.779 05:20:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.779 05:20:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.779 05:20:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.779 05:20:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:59.779 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:59.779 05:20:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.779 05:20:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.779 05:20:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.779 05:20:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.779 05:20:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:59.779 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:59.779 05:20:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.779 05:20:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:59.779 05:20:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:59.779 05:20:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:59.779 05:20:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:59.779 05:20:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:59.779 05:20:15 -- nvmf/common.sh@57 -- # uname 00:17:59.779 05:20:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:59.779 05:20:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:59.779 05:20:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:59.779 05:20:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:59.779 05:20:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:59.779 05:20:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:59.779 05:20:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:59.779 05:20:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:59.779 05:20:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:59.779 05:20:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:59.779 05:20:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:59.779 05:20:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.779 05:20:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:59.779 05:20:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:59.779 05:20:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.779 05:20:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:59.779 05:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:59.779 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.779 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.779 05:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:59.779 05:20:16 -- nvmf/common.sh@104 -- # continue 2 00:17:59.779 05:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:59.779 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.779 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:59.779 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:59.779 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.779 05:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:59.779 05:20:16 -- nvmf/common.sh@104 -- # continue 2 00:17:59.779 05:20:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:59.779 05:20:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:59.779 05:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:59.779 05:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:59.779 05:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:59.779 05:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:59.779 05:20:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:59.779 05:20:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:59.779 05:20:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:59.779 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.779 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:59.779 altname enp217s0f0np0 00:17:59.780 altname ens818f0np0 00:17:59.780 inet 192.168.100.8/24 scope global mlx_0_0 00:17:59.780 valid_lft forever preferred_lft forever 00:17:59.780 05:20:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:59.780 05:20:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:59.780 05:20:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:59.780 05:20:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:59.780 05:20:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:59.780 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.780 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:59.780 altname enp217s0f1np1 00:17:59.780 altname ens818f1np1 00:17:59.780 inet 192.168.100.9/24 scope global mlx_0_1 00:17:59.780 valid_lft forever preferred_lft forever 00:17:59.780 05:20:16 -- nvmf/common.sh@410 -- # return 0 00:17:59.780 05:20:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.780 05:20:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:59.780 05:20:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:59.780 05:20:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:59.780 05:20:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:59.780 05:20:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.780 05:20:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:59.780 05:20:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:59.780 05:20:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.780 05:20:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:59.780 05:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:59.780 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.780 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.780 05:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:59.780 05:20:16 -- nvmf/common.sh@104 -- # continue 2 00:17:59.780 05:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:59.780 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.780 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:59.780 05:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.780 05:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.780 05:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@104 -- # continue 2 00:17:59.780 05:20:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:59.780 05:20:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:59.780 05:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:59.780 05:20:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:59.780 05:20:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:59.780 05:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:59.780 05:20:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:59.780 192.168.100.9' 00:17:59.780 05:20:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:59.780 192.168.100.9' 00:17:59.780 05:20:16 -- nvmf/common.sh@445 -- # head -n 1 00:17:59.780 05:20:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:59.780 05:20:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:59.780 192.168.100.9' 00:17:59.780 05:20:16 -- nvmf/common.sh@446 -- # tail -n +2 00:17:59.780 05:20:16 -- nvmf/common.sh@446 -- # head -n 1 00:17:59.780 05:20:16 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:59.780 05:20:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:59.780 05:20:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:59.780 05:20:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:59.780 05:20:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:59.780 05:20:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:59.780 05:20:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:59.780 05:20:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.780 05:20:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.780 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.780 05:20:16 -- nvmf/common.sh@469 -- # nvmfpid=1807459 00:17:59.780 05:20:16 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:59.780 05:20:16 -- nvmf/common.sh@470 -- # waitforlisten 1807459 00:17:59.780 05:20:16 -- common/autotest_common.sh@829 -- # '[' -z 1807459 ']' 00:17:59.780 05:20:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.780 05:20:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.780 05:20:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.780 05:20:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.780 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.780 [2024-11-19 05:20:16.221158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
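[Note: the allocate_nic_ips / get_available_rdma_ips traces above derive the target addresses by walking the RDMA-capable netdevs and parsing `ip -o -4 addr show`. A minimal sketch of that extraction, hard-coding the two mlx_0_* interfaces seen in this run instead of the full rxe_cfg-based interface discovery:]

    get_ip_address() {
        local interface=$1
        # Field 4 of `ip -o -4 addr show` is e.g. 192.168.100.8/24; drop the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9 here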
00:17:59.780 [2024-11-19 05:20:16.221208] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.780 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.780 [2024-11-19 05:20:16.290893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.780 [2024-11-19 05:20:16.327973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.780 [2024-11-19 05:20:16.328087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.780 [2024-11-19 05:20:16.328097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.780 [2024-11-19 05:20:16.328106] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.780 [2024-11-19 05:20:16.328152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.780 [2024-11-19 05:20:16.328175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.780 [2024-11-19 05:20:16.328177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.718 05:20:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.718 05:20:17 -- common/autotest_common.sh@862 -- # return 0 00:18:00.718 05:20:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.718 05:20:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.718 05:20:17 -- common/autotest_common.sh@10 -- # set +x 00:18:00.719 05:20:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.719 05:20:17 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:00.719 [2024-11-19 05:20:17.272162] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7386c0/0x73cbb0) succeed. 00:18:00.977 [2024-11-19 05:20:17.281262] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x739c10/0x77e250) succeed. 
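[Note: the nvmf_lvol test traced below provisions its entire stack through rpc.py: two malloc bdevs striped into a raid0, an lvstore and lvol on top, an NVMe-oF subsystem exporting the lvol over RDMA, then snapshot/resize/clone/inflate while perf I/O runs. A condensed sketch of that call sequence, with the long rpc.py path shortened to `$rpc`, this run's parameter values used for illustration, and output capture simplified:]

    rpc="scripts/rpc.py"   # full path in this run: .../spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    malloc0=$($rpc bdev_malloc_create 64 512)            # -> Malloc0
    malloc1=$($rpc bdev_malloc_create 64 512)            # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$malloc0 $malloc1"
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID, e.g. 43ffb002-...
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume, e.g. 5f1f40df-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # While spdk_nvme_perf drives random writes against the subsystem:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                     # grow the lvol from 20 to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                      # decouple the clone from its snapshot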
00:18:00.977 05:20:17 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.236 05:20:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:01.236 05:20:17 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.236 05:20:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:01.236 05:20:17 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:01.495 05:20:17 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:01.754 05:20:18 -- target/nvmf_lvol.sh@29 -- # lvs=43ffb002-154b-4dfd-bc36-2f9ead44dad2 00:18:01.754 05:20:18 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43ffb002-154b-4dfd-bc36-2f9ead44dad2 lvol 20 00:18:02.013 05:20:18 -- target/nvmf_lvol.sh@32 -- # lvol=5f1f40df-1c25-4d5e-a293-5c0a43a17557 00:18:02.013 05:20:18 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:02.013 05:20:18 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f1f40df-1c25-4d5e-a293-5c0a43a17557 00:18:02.272 05:20:18 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:02.532 [2024-11-19 05:20:18.884755] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:02.532 05:20:18 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:02.791 05:20:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=1808031 00:18:02.791 05:20:19 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:02.791 05:20:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:02.791 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.729 05:20:20 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5f1f40df-1c25-4d5e-a293-5c0a43a17557 MY_SNAPSHOT 00:18:03.987 05:20:20 -- target/nvmf_lvol.sh@47 -- # snapshot=1b71fc5f-81eb-4e7c-82dc-ed901ba062e4 00:18:03.987 05:20:20 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5f1f40df-1c25-4d5e-a293-5c0a43a17557 30 00:18:03.987 05:20:20 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1b71fc5f-81eb-4e7c-82dc-ed901ba062e4 MY_CLONE 00:18:04.246 05:20:20 -- target/nvmf_lvol.sh@49 -- # clone=1c69fab5-efad-4867-a370-b5b7b21989ab 00:18:04.246 05:20:20 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1c69fab5-efad-4867-a370-b5b7b21989ab 00:18:04.506 05:20:20 -- target/nvmf_lvol.sh@53 -- # wait 1808031 00:18:14.492 Initializing NVMe Controllers 00:18:14.492 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:14.492 Controller IO queue size 128, less than required. 00:18:14.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:14.492 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:14.492 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:14.492 Initialization complete. Launching workers. 00:18:14.492 ======================================================== 00:18:14.492 Latency(us) 00:18:14.492 Device Information : IOPS MiB/s Average min max 00:18:14.492 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16821.50 65.71 7611.32 2036.39 44058.49 00:18:14.492 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16758.90 65.46 7639.73 3289.32 37660.30 00:18:14.492 ======================================================== 00:18:14.492 Total : 33580.40 131.17 7625.50 2036.39 44058.49 00:18:14.492 00:18:14.492 05:20:30 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:14.492 05:20:30 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f1f40df-1c25-4d5e-a293-5c0a43a17557 00:18:14.492 05:20:30 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43ffb002-154b-4dfd-bc36-2f9ead44dad2 00:18:14.752 05:20:31 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:14.752 05:20:31 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:14.752 05:20:31 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:14.752 05:20:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:14.752 05:20:31 -- nvmf/common.sh@116 -- # sync 00:18:14.752 05:20:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:14.752 05:20:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:14.752 05:20:31 -- nvmf/common.sh@119 -- # set +e 00:18:14.752 05:20:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:14.752 05:20:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:14.752 rmmod nvme_rdma 00:18:14.752 rmmod nvme_fabrics 00:18:14.752 05:20:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:14.752 05:20:31 -- nvmf/common.sh@123 -- # set -e 00:18:14.752 05:20:31 -- nvmf/common.sh@124 -- # return 0 00:18:14.752 05:20:31 -- nvmf/common.sh@477 -- # '[' -n 1807459 ']' 00:18:14.752 05:20:31 -- nvmf/common.sh@478 -- # killprocess 1807459 00:18:14.752 05:20:31 -- common/autotest_common.sh@936 -- # '[' -z 1807459 ']' 00:18:14.752 05:20:31 -- common/autotest_common.sh@940 -- # kill -0 1807459 00:18:14.752 05:20:31 -- common/autotest_common.sh@941 -- # uname 00:18:14.752 05:20:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.752 05:20:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1807459 00:18:14.752 05:20:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:14.752 05:20:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:14.752 05:20:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1807459' 00:18:14.752 killing process with pid 1807459 00:18:14.752 05:20:31 -- common/autotest_common.sh@955 -- # kill 1807459 00:18:14.752 05:20:31 -- common/autotest_common.sh@960 -- # wait 1807459 00:18:15.012 05:20:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.012 05:20:31 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:15.012 00:18:15.012 real 0m22.057s 00:18:15.012 user 1m11.761s 00:18:15.012 sys 0m6.260s 00:18:15.012 05:20:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:15.012 05:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:15.012 ************************************ 00:18:15.012 END TEST nvmf_lvol 00:18:15.012 ************************************ 00:18:15.012 05:20:31 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:15.012 05:20:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.012 05:20:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.012 05:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:15.012 ************************************ 00:18:15.012 START TEST nvmf_lvs_grow 00:18:15.012 ************************************ 00:18:15.012 05:20:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:15.272 * Looking for test storage... 00:18:15.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:15.272 05:20:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:15.272 05:20:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:15.272 05:20:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:15.272 05:20:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:15.272 05:20:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:15.272 05:20:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:15.272 05:20:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:15.272 05:20:31 -- scripts/common.sh@335 -- # IFS=.-: 00:18:15.272 05:20:31 -- scripts/common.sh@335 -- # read -ra ver1 00:18:15.272 05:20:31 -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.272 05:20:31 -- scripts/common.sh@336 -- # read -ra ver2 00:18:15.272 05:20:31 -- scripts/common.sh@337 -- # local 'op=<' 00:18:15.272 05:20:31 -- scripts/common.sh@339 -- # ver1_l=2 00:18:15.272 05:20:31 -- scripts/common.sh@340 -- # ver2_l=1 00:18:15.272 05:20:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:15.272 05:20:31 -- scripts/common.sh@343 -- # case "$op" in 00:18:15.272 05:20:31 -- scripts/common.sh@344 -- # : 1 00:18:15.272 05:20:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:15.272 05:20:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.272 05:20:31 -- scripts/common.sh@364 -- # decimal 1 00:18:15.272 05:20:31 -- scripts/common.sh@352 -- # local d=1 00:18:15.272 05:20:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.272 05:20:31 -- scripts/common.sh@354 -- # echo 1 00:18:15.272 05:20:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:15.272 05:20:31 -- scripts/common.sh@365 -- # decimal 2 00:18:15.272 05:20:31 -- scripts/common.sh@352 -- # local d=2 00:18:15.272 05:20:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.272 05:20:31 -- scripts/common.sh@354 -- # echo 2 00:18:15.272 05:20:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:15.272 05:20:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:15.272 05:20:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:15.272 05:20:31 -- scripts/common.sh@367 -- # return 0 00:18:15.272 05:20:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.272 05:20:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.272 --rc genhtml_branch_coverage=1 00:18:15.272 --rc genhtml_function_coverage=1 00:18:15.272 --rc genhtml_legend=1 00:18:15.272 --rc geninfo_all_blocks=1 00:18:15.272 --rc geninfo_unexecuted_blocks=1 00:18:15.272 00:18:15.272 ' 00:18:15.272 05:20:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.272 --rc genhtml_branch_coverage=1 00:18:15.272 --rc genhtml_function_coverage=1 00:18:15.272 --rc genhtml_legend=1 00:18:15.272 --rc geninfo_all_blocks=1 00:18:15.272 --rc geninfo_unexecuted_blocks=1 00:18:15.272 00:18:15.272 ' 00:18:15.272 05:20:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.272 --rc genhtml_branch_coverage=1 00:18:15.272 --rc genhtml_function_coverage=1 00:18:15.272 --rc genhtml_legend=1 00:18:15.272 --rc geninfo_all_blocks=1 00:18:15.272 --rc geninfo_unexecuted_blocks=1 00:18:15.272 00:18:15.272 ' 00:18:15.272 05:20:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.272 --rc genhtml_branch_coverage=1 00:18:15.272 --rc genhtml_function_coverage=1 00:18:15.272 --rc genhtml_legend=1 00:18:15.273 --rc geninfo_all_blocks=1 00:18:15.273 --rc geninfo_unexecuted_blocks=1 00:18:15.273 00:18:15.273 ' 00:18:15.273 05:20:31 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.273 05:20:31 -- nvmf/common.sh@7 -- # uname -s 00:18:15.273 05:20:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.273 05:20:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.273 05:20:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.273 05:20:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.273 05:20:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.273 05:20:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.273 05:20:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.273 05:20:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.273 05:20:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.273 05:20:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.273 05:20:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:15.273 05:20:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:15.273 05:20:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.273 05:20:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.273 05:20:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.273 05:20:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:15.273 05:20:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.273 05:20:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.273 05:20:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.273 05:20:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.273 05:20:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.273 05:20:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.273 05:20:31 -- paths/export.sh@5 -- # export PATH 00:18:15.273 05:20:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.273 05:20:31 -- nvmf/common.sh@46 -- # : 0 00:18:15.273 05:20:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.273 05:20:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.273 05:20:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.273 05:20:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.273 05:20:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.273 05:20:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.273 05:20:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.273 05:20:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.273 05:20:31 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:15.273 05:20:31 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.273 05:20:31 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:15.273 05:20:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:15.273 05:20:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.273 05:20:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.273 05:20:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.273 05:20:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.273 05:20:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.273 05:20:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.273 05:20:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.273 05:20:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:15.273 05:20:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:15.273 05:20:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:15.273 05:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 05:20:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.848 05:20:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:21.848 05:20:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:21.848 05:20:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:21.848 05:20:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:21.848 05:20:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:21.848 05:20:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:21.848 05:20:38 -- nvmf/common.sh@294 -- # net_devs=() 00:18:21.848 05:20:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:21.848 05:20:38 -- nvmf/common.sh@295 -- # e810=() 00:18:21.848 05:20:38 -- nvmf/common.sh@295 -- # local -ga e810 00:18:21.848 05:20:38 -- nvmf/common.sh@296 -- # x722=() 00:18:21.848 05:20:38 -- nvmf/common.sh@296 -- # local -ga x722 00:18:21.848 05:20:38 -- nvmf/common.sh@297 -- # mlx=() 00:18:21.848 05:20:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:21.848 05:20:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.848 05:20:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:21.848 05:20:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
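[Note: the gather_supported_nvmf_pci_devs pass traced here (and in full for nvmf_lvol earlier) classifies NICs by PCI vendor:device ID into the e810/x722/mlx arrays, then resolves each matching PCI function to its netdev through sysfs, producing the "Found net devices under ..." echoes that follow just below. A minimal sketch of the sysfs lookup for one address from this run:]

    pci=0000:d9:00.0                                   # mlx5 function (0x15b3 - 0x1015) in this run
    pci_net_devs=( /sys/bus/pci/devices/$pci/net/* )   # sysfs netdev entries, e.g. .../net/mlx_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )          # strip the sysfs path, keeping the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"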
00:18:21.848 05:20:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:21.848 05:20:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:21.848 05:20:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:21.848 05:20:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:21.848 05:20:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:21.848 05:20:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:21.849 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:21.849 05:20:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.849 05:20:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:21.849 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:21.849 05:20:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.849 05:20:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:21.849 05:20:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.849 05:20:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.849 05:20:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.849 05:20:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:21.849 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:21.849 05:20:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.849 05:20:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.849 05:20:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.849 05:20:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.849 05:20:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:21.849 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:21.849 05:20:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.849 05:20:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:21.849 05:20:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:21.849 05:20:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:21.849 05:20:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:21.849 05:20:38 -- nvmf/common.sh@57 -- # uname 00:18:21.849 05:20:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:18:21.849 05:20:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:21.849 05:20:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:21.849 05:20:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:21.849 05:20:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:21.849 05:20:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:21.849 05:20:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:21.849 05:20:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:21.849 05:20:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:21.849 05:20:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:21.849 05:20:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:21.849 05:20:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.849 05:20:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:21.849 05:20:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:21.849 05:20:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:21.849 05:20:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:21.849 05:20:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:21.849 05:20:38 -- nvmf/common.sh@104 -- # continue 2 00:18:21.849 05:20:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.849 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:21.849 05:20:38 -- nvmf/common.sh@104 -- # continue 2 00:18:21.849 05:20:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:21.849 05:20:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:21.849 05:20:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:21.849 05:20:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:21.849 05:20:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:21.849 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:21.849 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:21.849 altname enp217s0f0np0 00:18:21.849 altname ens818f0np0 00:18:21.849 inet 192.168.100.8/24 scope global mlx_0_0 00:18:21.849 valid_lft forever preferred_lft forever 00:18:21.849 05:20:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:21.849 05:20:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:21.849 05:20:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:21.849 05:20:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:21.849 05:20:38 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:21.849 05:20:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:18:21.849 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:21.849 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:21.849 altname enp217s0f1np1 00:18:21.849 altname ens818f1np1 00:18:21.849 inet 192.168.100.9/24 scope global mlx_0_1 00:18:21.849 valid_lft forever preferred_lft forever 00:18:21.849 05:20:38 -- nvmf/common.sh@410 -- # return 0 00:18:21.849 05:20:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:21.849 05:20:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:21.849 05:20:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:21.849 05:20:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:21.849 05:20:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:21.849 05:20:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.849 05:20:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:21.849 05:20:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:21.849 05:20:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:22.109 05:20:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:22.109 05:20:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:22.109 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.109 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:22.109 05:20:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:22.109 05:20:38 -- nvmf/common.sh@104 -- # continue 2 00:18:22.109 05:20:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:22.109 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.109 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:22.109 05:20:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.109 05:20:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:22.109 05:20:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:22.109 05:20:38 -- nvmf/common.sh@104 -- # continue 2 00:18:22.109 05:20:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:22.109 05:20:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:22.109 05:20:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:22.109 05:20:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:22.109 05:20:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:22.109 05:20:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:22.109 05:20:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:22.109 05:20:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:22.109 192.168.100.9' 00:18:22.109 05:20:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:22.109 192.168.100.9' 00:18:22.109 05:20:38 -- nvmf/common.sh@445 -- # head -n 1 00:18:22.109 05:20:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:22.109 05:20:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:22.109 192.168.100.9' 00:18:22.109 05:20:38 -- nvmf/common.sh@446 -- # tail -n +2 00:18:22.109 05:20:38 -- nvmf/common.sh@446 -- # head -n 1 00:18:22.109 05:20:38 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:22.109 05:20:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:22.109 05:20:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:22.109 05:20:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:22.109 05:20:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:22.109 05:20:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:22.109 05:20:38 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:22.109 05:20:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:22.109 05:20:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.109 05:20:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.109 05:20:38 -- nvmf/common.sh@469 -- # nvmfpid=1813377 00:18:22.109 05:20:38 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:22.109 05:20:38 -- nvmf/common.sh@470 -- # waitforlisten 1813377 00:18:22.109 05:20:38 -- common/autotest_common.sh@829 -- # '[' -z 1813377 ']' 00:18:22.109 05:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.109 05:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.109 05:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.109 05:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.109 05:20:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.109 [2024-11-19 05:20:38.548253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:22.109 [2024-11-19 05:20:38.548299] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.109 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.109 [2024-11-19 05:20:38.618192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.109 [2024-11-19 05:20:38.654550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:22.109 [2024-11-19 05:20:38.654682] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.109 [2024-11-19 05:20:38.654692] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.109 [2024-11-19 05:20:38.654701] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
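
The address harvesting that produced RDMA_IP_LIST above is a small ip(8) pipeline per RDMA-backed interface, with the first and second target IPs peeled off by head/tail. Roughly, using the interface names from this run:

```bash
# First IPv4 address on an interface, exactly the pipeline used above.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
```
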
00:18:22.109 [2024-11-19 05:20:38.654726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.047 05:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.047 05:20:39 -- common/autotest_common.sh@862 -- # return 0 00:18:23.047 05:20:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:23.047 05:20:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.047 05:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.047 05:20:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.047 05:20:39 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:23.047 [2024-11-19 05:20:39.593703] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2342080/0x2346570) succeed. 00:18:23.047 [2024-11-19 05:20:39.603224] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2343580/0x2387c10) succeed. 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:23.307 05:20:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:23.307 05:20:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.307 05:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.307 ************************************ 00:18:23.307 START TEST lvs_grow_clean 00:18:23.307 ************************************ 00:18:23.307 05:20:39 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:23.307 05:20:39 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:23.308 05:20:39 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:23.308 05:20:39 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:23.567 05:20:39 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:23.567 05:20:39 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:23.567 05:20:40 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:23.567 05:20:40 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:23.567 05:20:40 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:23.827 05:20:40 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:23.827 05:20:40 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:23.827 05:20:40 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 lvol 150 00:18:24.086 05:20:40 -- target/nvmf_lvs_grow.sh@33 -- # lvol=411fb9f7-5bd4-4d82-ba15-26b6eca2352a 00:18:24.086 05:20:40 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:24.086 05:20:40 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:24.086 [2024-11-19 05:20:40.576350] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:24.086 [2024-11-19 05:20:40.576405] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:24.086 true 00:18:24.086 05:20:40 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:24.086 05:20:40 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:24.345 05:20:40 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:24.345 05:20:40 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:24.605 05:20:40 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 411fb9f7-5bd4-4d82-ba15-26b6eca2352a 00:18:24.605 05:20:41 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:24.864 [2024-11-19 05:20:41.258637] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:24.864 05:20:41 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:25.123 05:20:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1813947 00:18:25.123 05:20:41 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:25.123 05:20:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.123 05:20:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1813947 /var/tmp/bdevperf.sock 00:18:25.123 05:20:41 -- common/autotest_common.sh@829 -- # '[' -z 1813947 ']' 00:18:25.123 05:20:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.123 05:20:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.123 05:20:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.123 05:20:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.123 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:18:25.123 [2024-11-19 05:20:41.477360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
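
Before the bdevperf banner above, the lvs_grow_clean setup ran as a short RPC sequence: a 200 MiB file backs an aio bdev, a 4 MiB-cluster lvstore is built on it (49 data clusters once metadata is deducted), a 150 MiB lvol is created, and the file is then doubled and rescanned (51200 to 102400 blocks, as logged). Condensed, with rpc.py standing in for scripts/rpc.py and the aio file path shortened:

```bash
truncate -s 200M aio_file
rpc.py bdev_aio_create aio_file aio_bdev 4096          # 4 KiB blocks
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs" lvol 150             # 150 MiB volume
truncate -s 400M aio_file                              # grow the backing file
rpc.py bdev_aio_rescan aio_bdev                        # 51200 -> 102400 blocks
```
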
00:18:25.123 [2024-11-19 05:20:41.477410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813947 ] 00:18:25.123 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.123 [2024-11-19 05:20:41.546579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.123 [2024-11-19 05:20:41.582549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.061 05:20:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.061 05:20:42 -- common/autotest_common.sh@862 -- # return 0 00:18:26.061 05:20:42 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:26.061 Nvme0n1 00:18:26.061 05:20:42 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:26.320 [ 00:18:26.320 { 00:18:26.320 "name": "Nvme0n1", 00:18:26.320 "aliases": [ 00:18:26.320 "411fb9f7-5bd4-4d82-ba15-26b6eca2352a" 00:18:26.320 ], 00:18:26.320 "product_name": "NVMe disk", 00:18:26.320 "block_size": 4096, 00:18:26.320 "num_blocks": 38912, 00:18:26.320 "uuid": "411fb9f7-5bd4-4d82-ba15-26b6eca2352a", 00:18:26.320 "assigned_rate_limits": { 00:18:26.320 "rw_ios_per_sec": 0, 00:18:26.320 "rw_mbytes_per_sec": 0, 00:18:26.320 "r_mbytes_per_sec": 0, 00:18:26.320 "w_mbytes_per_sec": 0 00:18:26.320 }, 00:18:26.320 "claimed": false, 00:18:26.320 "zoned": false, 00:18:26.320 "supported_io_types": { 00:18:26.320 "read": true, 00:18:26.320 "write": true, 00:18:26.320 "unmap": true, 00:18:26.320 "write_zeroes": true, 00:18:26.320 "flush": true, 00:18:26.320 "reset": true, 00:18:26.320 "compare": true, 00:18:26.320 "compare_and_write": true, 00:18:26.320 "abort": true, 00:18:26.320 "nvme_admin": true, 00:18:26.320 "nvme_io": true 00:18:26.320 }, 00:18:26.320 "memory_domains": [ 00:18:26.320 { 00:18:26.320 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:26.320 "dma_device_type": 0 00:18:26.320 } 00:18:26.320 ], 00:18:26.320 "driver_specific": { 00:18:26.320 "nvme": [ 00:18:26.320 { 00:18:26.320 "trid": { 00:18:26.320 "trtype": "RDMA", 00:18:26.320 "adrfam": "IPv4", 00:18:26.320 "traddr": "192.168.100.8", 00:18:26.320 "trsvcid": "4420", 00:18:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:26.320 }, 00:18:26.320 "ctrlr_data": { 00:18:26.320 "cntlid": 1, 00:18:26.320 "vendor_id": "0x8086", 00:18:26.320 "model_number": "SPDK bdev Controller", 00:18:26.320 "serial_number": "SPDK0", 00:18:26.320 "firmware_revision": "24.01.1", 00:18:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:26.320 "oacs": { 00:18:26.320 "security": 0, 00:18:26.320 "format": 0, 00:18:26.320 "firmware": 0, 00:18:26.320 "ns_manage": 0 00:18:26.320 }, 00:18:26.320 "multi_ctrlr": true, 00:18:26.320 "ana_reporting": false 00:18:26.320 }, 00:18:26.320 "vs": { 00:18:26.320 "nvme_version": "1.3" 00:18:26.320 }, 00:18:26.320 "ns_data": { 00:18:26.320 "id": 1, 00:18:26.320 "can_share": true 00:18:26.320 } 00:18:26.320 } 00:18:26.320 ], 00:18:26.320 "mp_policy": "active_passive" 00:18:26.320 } 00:18:26.320 } 00:18:26.320 ] 00:18:26.320 05:20:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1814225 00:18:26.320 05:20:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:26.320 05:20:42 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.320 Running I/O for 10 seconds... 00:18:27.258 Latency(us) 00:18:27.258 [2024-11-19T04:20:43.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.258 [2024-11-19T04:20:43.816Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.258 Nvme0n1 : 1.00 36800.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:27.258 [2024-11-19T04:20:43.816Z] =================================================================================================================== 00:18:27.258 [2024-11-19T04:20:43.816Z] Total : 36800.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:27.258 00:18:28.195 05:20:44 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:28.455 [2024-11-19T04:20:45.013Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.455 Nvme0n1 : 2.00 37089.00 144.88 0.00 0.00 0.00 0.00 0.00 00:18:28.455 [2024-11-19T04:20:45.013Z] =================================================================================================================== 00:18:28.455 [2024-11-19T04:20:45.013Z] Total : 37089.00 144.88 0.00 0.00 0.00 0.00 0.00 00:18:28.455 00:18:28.455 true 00:18:28.455 05:20:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:28.455 05:20:44 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:28.714 05:20:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:28.714 05:20:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:28.714 05:20:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 1814225 00:18:29.282 [2024-11-19T04:20:45.840Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.282 Nvme0n1 : 3.00 37194.67 145.29 0.00 0.00 0.00 0.00 0.00 00:18:29.282 [2024-11-19T04:20:45.840Z] =================================================================================================================== 00:18:29.282 [2024-11-19T04:20:45.840Z] Total : 37194.67 145.29 0.00 0.00 0.00 0.00 0.00 00:18:29.282 00:18:30.661 [2024-11-19T04:20:47.219Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.661 Nvme0n1 : 4.00 37319.75 145.78 0.00 0.00 0.00 0.00 0.00 00:18:30.661 [2024-11-19T04:20:47.219Z] =================================================================================================================== 00:18:30.661 [2024-11-19T04:20:47.219Z] Total : 37319.75 145.78 0.00 0.00 0.00 0.00 0.00 00:18:30.661 00:18:31.598 [2024-11-19T04:20:48.156Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.598 Nvme0n1 : 5.00 37395.00 146.07 0.00 0.00 0.00 0.00 0.00 00:18:31.598 [2024-11-19T04:20:48.156Z] =================================================================================================================== 00:18:31.598 [2024-11-19T04:20:48.156Z] Total : 37395.00 146.07 0.00 0.00 0.00 0.00 0.00 00:18:31.598 00:18:32.536 [2024-11-19T04:20:49.094Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.536 Nvme0n1 : 6.00 37439.67 146.25 0.00 0.00 0.00 0.00 0.00 00:18:32.536 [2024-11-19T04:20:49.094Z] 
=================================================================================================================== 00:18:32.536 [2024-11-19T04:20:49.094Z] Total : 37439.67 146.25 0.00 0.00 0.00 0.00 0.00 00:18:32.536 00:18:33.473 [2024-11-19T04:20:50.031Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.473 Nvme0n1 : 7.00 37494.29 146.46 0.00 0.00 0.00 0.00 0.00 00:18:33.473 [2024-11-19T04:20:50.031Z] =================================================================================================================== 00:18:33.473 [2024-11-19T04:20:50.031Z] Total : 37494.29 146.46 0.00 0.00 0.00 0.00 0.00 00:18:33.473 00:18:34.411 [2024-11-19T04:20:50.969Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.411 Nvme0n1 : 8.00 37428.38 146.20 0.00 0.00 0.00 0.00 0.00 00:18:34.411 [2024-11-19T04:20:50.969Z] =================================================================================================================== 00:18:34.412 [2024-11-19T04:20:50.970Z] Total : 37428.38 146.20 0.00 0.00 0.00 0.00 0.00 00:18:34.412 00:18:35.350 [2024-11-19T04:20:51.908Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.350 Nvme0n1 : 9.00 37457.89 146.32 0.00 0.00 0.00 0.00 0.00 00:18:35.350 [2024-11-19T04:20:51.908Z] =================================================================================================================== 00:18:35.350 [2024-11-19T04:20:51.908Z] Total : 37457.89 146.32 0.00 0.00 0.00 0.00 0.00 00:18:35.350 00:18:36.288 [2024-11-19T04:20:52.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.288 Nvme0n1 : 10.00 37491.30 146.45 0.00 0.00 0.00 0.00 0.00 00:18:36.288 [2024-11-19T04:20:52.846Z] =================================================================================================================== 00:18:36.288 [2024-11-19T04:20:52.846Z] Total : 37491.30 146.45 0.00 0.00 0.00 0.00 0.00 00:18:36.288 00:18:36.288 00:18:36.288 Latency(us) 00:18:36.288 [2024-11-19T04:20:52.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.288 [2024-11-19T04:20:52.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.288 Nvme0n1 : 10.00 37492.11 146.45 0.00 0.00 3411.62 2359.30 7864.32 00:18:36.288 [2024-11-19T04:20:52.846Z] =================================================================================================================== 00:18:36.288 [2024-11-19T04:20:52.846Z] Total : 37492.11 146.45 0.00 0.00 3411.62 2359.30 7864.32 00:18:36.288 0 00:18:36.548 05:20:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1813947 00:18:36.548 05:20:52 -- common/autotest_common.sh@936 -- # '[' -z 1813947 ']' 00:18:36.548 05:20:52 -- common/autotest_common.sh@940 -- # kill -0 1813947 00:18:36.548 05:20:52 -- common/autotest_common.sh@941 -- # uname 00:18:36.548 05:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.548 05:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1813947 00:18:36.548 05:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.548 05:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.548 05:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1813947' 00:18:36.548 killing process with pid 1813947 00:18:36.548 05:20:52 -- common/autotest_common.sh@955 -- # kill 1813947 00:18:36.548 Received shutdown signal, test time was about 10.000000 seconds 
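
The 10-second run that just finished drives bdevperf in wait-for-RPC mode: -z keeps it idle until a controller is attached over its private RPC socket, and bdevperf.py then kicks off the workload. The shape of it, with the flags and addresses from this run (long paths omitted):

```bash
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
         -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # emits the Latency tables
```
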
00:18:36.548 00:18:36.548 Latency(us) 00:18:36.548 [2024-11-19T04:20:53.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.548 [2024-11-19T04:20:53.106Z] =================================================================================================================== 00:18:36.548 [2024-11-19T04:20:53.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.548 05:20:52 -- common/autotest_common.sh@960 -- # wait 1813947 00:18:36.548 05:20:53 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:36.808 05:20:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:36.808 05:20:53 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:37.067 05:20:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:37.067 05:20:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:37.067 05:20:53 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:37.327 [2024-11-19 05:20:53.653948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:37.327 05:20:53 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:37.327 05:20:53 -- common/autotest_common.sh@650 -- # local es=0 00:18:37.327 05:20:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:37.327 05:20:53 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:37.327 05:20:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.327 05:20:53 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:37.327 05:20:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.327 05:20:53 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:37.327 05:20:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.327 05:20:53 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:37.327 05:20:53 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:37.327 05:20:53 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:37.327 request: 00:18:37.327 { 00:18:37.327 "uuid": "b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5", 00:18:37.327 "method": "bdev_lvol_get_lvstores", 00:18:37.327 "req_id": 1 00:18:37.327 } 00:18:37.327 Got JSON-RPC error response 00:18:37.327 response: 00:18:37.327 { 00:18:37.327 "code": -19, 00:18:37.327 "message": "No such device" 00:18:37.327 } 00:18:37.327 05:20:53 -- common/autotest_common.sh@653 -- # es=1 00:18:37.327 05:20:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.327 05:20:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.327 05:20:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.327 05:20:53 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:37.586 aio_bdev 00:18:37.586 05:20:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 411fb9f7-5bd4-4d82-ba15-26b6eca2352a 00:18:37.586 05:20:54 -- common/autotest_common.sh@897 -- # local bdev_name=411fb9f7-5bd4-4d82-ba15-26b6eca2352a 00:18:37.586 05:20:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:37.586 05:20:54 -- common/autotest_common.sh@899 -- # local i 00:18:37.586 05:20:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:37.586 05:20:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:37.586 05:20:54 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:37.845 05:20:54 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 411fb9f7-5bd4-4d82-ba15-26b6eca2352a -t 2000 00:18:37.845 [ 00:18:37.845 { 00:18:37.845 "name": "411fb9f7-5bd4-4d82-ba15-26b6eca2352a", 00:18:37.845 "aliases": [ 00:18:37.845 "lvs/lvol" 00:18:37.845 ], 00:18:37.845 "product_name": "Logical Volume", 00:18:37.845 "block_size": 4096, 00:18:37.845 "num_blocks": 38912, 00:18:37.845 "uuid": "411fb9f7-5bd4-4d82-ba15-26b6eca2352a", 00:18:37.845 "assigned_rate_limits": { 00:18:37.845 "rw_ios_per_sec": 0, 00:18:37.845 "rw_mbytes_per_sec": 0, 00:18:37.845 "r_mbytes_per_sec": 0, 00:18:37.845 "w_mbytes_per_sec": 0 00:18:37.845 }, 00:18:37.845 "claimed": false, 00:18:37.845 "zoned": false, 00:18:37.845 "supported_io_types": { 00:18:37.845 "read": true, 00:18:37.845 "write": true, 00:18:37.845 "unmap": true, 00:18:37.845 "write_zeroes": true, 00:18:37.845 "flush": false, 00:18:37.845 "reset": true, 00:18:37.845 "compare": false, 00:18:37.845 "compare_and_write": false, 00:18:37.845 "abort": false, 00:18:37.845 "nvme_admin": false, 00:18:37.845 "nvme_io": false 00:18:37.845 }, 00:18:37.845 "driver_specific": { 00:18:37.845 "lvol": { 00:18:37.845 "lvol_store_uuid": "b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5", 00:18:37.845 "base_bdev": "aio_bdev", 00:18:37.845 "thin_provision": false, 00:18:37.845 "snapshot": false, 00:18:37.845 "clone": false, 00:18:37.845 "esnap_clone": false 00:18:37.845 } 00:18:37.845 } 00:18:37.845 } 00:18:37.845 ] 00:18:37.845 05:20:54 -- common/autotest_common.sh@905 -- # return 0 00:18:37.845 05:20:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:37.845 05:20:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:38.104 05:20:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:38.104 05:20:54 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:38.105 05:20:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:38.364 05:20:54 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:38.364 05:20:54 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 411fb9f7-5bd4-4d82-ba15-26b6eca2352a 00:18:38.364 05:20:54 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3a3a72d-ee59-4f1b-a1a0-02ff81de55a5 00:18:38.623 05:20:55 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:38.882 00:18:38.882 real 0m15.644s 00:18:38.882 user 0m15.589s 00:18:38.882 sys 0m1.157s 00:18:38.882 05:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:38.882 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:18:38.882 ************************************ 00:18:38.882 END TEST lvs_grow_clean 00:18:38.882 ************************************ 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:38.882 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:38.882 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.882 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:18:38.882 ************************************ 00:18:38.882 START TEST lvs_grow_dirty 00:18:38.882 ************************************ 00:18:38.882 05:20:55 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:38.882 05:20:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:39.141 05:20:55 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:39.141 05:20:55 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@28 -- # lvs=93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:39.404 05:20:55 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93cc9342-4581-4452-89f8-e8ca87f96b56 lvol 150 00:18:39.684 05:20:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9e101465-c194-4327-8c03-84905ca258d6 00:18:39.684 05:20:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.684 05:20:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:39.947 [2024-11-19 05:20:56.274218] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:39.947 [2024-11-19 05:20:56.274269] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:39.947 true 00:18:39.947 05:20:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:39.947 05:20:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:39.947 05:20:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:39.947 05:20:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:40.206 05:20:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9e101465-c194-4327-8c03-84905ca258d6 00:18:40.463 05:20:56 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:40.463 05:20:56 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:40.722 05:20:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1816716 00:18:40.722 05:20:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.722 05:20:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:40.722 05:20:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1816716 /var/tmp/bdevperf.sock 00:18:40.722 05:20:57 -- common/autotest_common.sh@829 -- # '[' -z 1816716 ']' 00:18:40.722 05:20:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.722 05:20:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.722 05:20:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.722 05:20:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.722 05:20:57 -- common/autotest_common.sh@10 -- # set +x 00:18:40.722 [2024-11-19 05:20:57.188438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
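
The clean-case teardown between the two runs above also exercised a negative path: with aio_bdev deleted, bdev_lvol_get_lvstores must fail, so the suite wraps it in NOT and counts the JSON-RPC -19 ("No such device") reply as a pass. A simplified sketch; the real autotest_common.sh helper additionally inspects the exit status to tell crashes (status > 128) from ordinary errors, as the es checks in the log show:

```bash
# Inverted-status helper: succeeds only if the wrapped command fails.
NOT() {
    ! "$@"
}

rpc.py bdev_aio_delete aio_bdev
NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"   # passes: the lvstore is gone (-19)
```
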
00:18:40.722 [2024-11-19 05:20:57.188491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816716 ] 00:18:40.722 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.722 [2024-11-19 05:20:57.258214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.981 [2024-11-19 05:20:57.295672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.548 05:20:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.548 05:20:57 -- common/autotest_common.sh@862 -- # return 0 00:18:41.548 05:20:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:41.807 Nvme0n1 00:18:41.807 05:20:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:42.066 [ 00:18:42.066 { 00:18:42.066 "name": "Nvme0n1", 00:18:42.066 "aliases": [ 00:18:42.066 "9e101465-c194-4327-8c03-84905ca258d6" 00:18:42.066 ], 00:18:42.066 "product_name": "NVMe disk", 00:18:42.066 "block_size": 4096, 00:18:42.066 "num_blocks": 38912, 00:18:42.066 "uuid": "9e101465-c194-4327-8c03-84905ca258d6", 00:18:42.066 "assigned_rate_limits": { 00:18:42.066 "rw_ios_per_sec": 0, 00:18:42.066 "rw_mbytes_per_sec": 0, 00:18:42.066 "r_mbytes_per_sec": 0, 00:18:42.066 "w_mbytes_per_sec": 0 00:18:42.066 }, 00:18:42.066 "claimed": false, 00:18:42.066 "zoned": false, 00:18:42.066 "supported_io_types": { 00:18:42.066 "read": true, 00:18:42.066 "write": true, 00:18:42.066 "unmap": true, 00:18:42.066 "write_zeroes": true, 00:18:42.066 "flush": true, 00:18:42.066 "reset": true, 00:18:42.066 "compare": true, 00:18:42.066 "compare_and_write": true, 00:18:42.066 "abort": true, 00:18:42.066 "nvme_admin": true, 00:18:42.066 "nvme_io": true 00:18:42.066 }, 00:18:42.066 "memory_domains": [ 00:18:42.066 { 00:18:42.066 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:42.066 "dma_device_type": 0 00:18:42.066 } 00:18:42.066 ], 00:18:42.066 "driver_specific": { 00:18:42.066 "nvme": [ 00:18:42.066 { 00:18:42.066 "trid": { 00:18:42.066 "trtype": "RDMA", 00:18:42.066 "adrfam": "IPv4", 00:18:42.066 "traddr": "192.168.100.8", 00:18:42.066 "trsvcid": "4420", 00:18:42.066 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:42.066 }, 00:18:42.066 "ctrlr_data": { 00:18:42.066 "cntlid": 1, 00:18:42.066 "vendor_id": "0x8086", 00:18:42.066 "model_number": "SPDK bdev Controller", 00:18:42.066 "serial_number": "SPDK0", 00:18:42.066 "firmware_revision": "24.01.1", 00:18:42.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:42.066 "oacs": { 00:18:42.066 "security": 0, 00:18:42.066 "format": 0, 00:18:42.066 "firmware": 0, 00:18:42.066 "ns_manage": 0 00:18:42.066 }, 00:18:42.066 "multi_ctrlr": true, 00:18:42.066 "ana_reporting": false 00:18:42.066 }, 00:18:42.066 "vs": { 00:18:42.066 "nvme_version": "1.3" 00:18:42.066 }, 00:18:42.066 "ns_data": { 00:18:42.066 "id": 1, 00:18:42.066 "can_share": true 00:18:42.066 } 00:18:42.066 } 00:18:42.066 ], 00:18:42.066 "mp_policy": "active_passive" 00:18:42.066 } 00:18:42.066 } 00:18:42.066 ] 00:18:42.066 05:20:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1816988 00:18:42.066 05:20:58 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.066 05:20:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:42.066 Running I/O for 10 seconds... 00:18:43.002 Latency(us) 00:18:43.002 [2024-11-19T04:20:59.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.002 [2024-11-19T04:20:59.560Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.002 Nvme0n1 : 1.00 36769.00 143.63 0.00 0.00 0.00 0.00 0.00 00:18:43.002 [2024-11-19T04:20:59.560Z] =================================================================================================================== 00:18:43.002 [2024-11-19T04:20:59.560Z] Total : 36769.00 143.63 0.00 0.00 0.00 0.00 0.00 00:18:43.002 00:18:43.938 05:21:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:44.197 [2024-11-19T04:21:00.755Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.197 Nvme0n1 : 2.00 36752.50 143.56 0.00 0.00 0.00 0.00 0.00 00:18:44.197 [2024-11-19T04:21:00.755Z] =================================================================================================================== 00:18:44.197 [2024-11-19T04:21:00.755Z] Total : 36752.50 143.56 0.00 0.00 0.00 0.00 0.00 00:18:44.197 00:18:44.197 true 00:18:44.197 05:21:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:44.197 05:21:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:44.455 05:21:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:44.455 05:21:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:44.455 05:21:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 1816988 00:18:45.023 [2024-11-19T04:21:01.581Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.023 Nvme0n1 : 3.00 36737.67 143.51 0.00 0.00 0.00 0.00 0.00 00:18:45.023 [2024-11-19T04:21:01.581Z] =================================================================================================================== 00:18:45.023 [2024-11-19T04:21:01.581Z] Total : 36737.67 143.51 0.00 0.00 0.00 0.00 0.00 00:18:45.023 00:18:46.400 [2024-11-19T04:21:02.959Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.401 Nvme0n1 : 4.00 36801.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:46.401 [2024-11-19T04:21:02.959Z] =================================================================================================================== 00:18:46.401 [2024-11-19T04:21:02.959Z] Total : 36801.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:46.401 00:18:46.968 [2024-11-19T04:21:03.526Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.968 Nvme0n1 : 5.00 36801.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:46.968 [2024-11-19T04:21:03.526Z] =================================================================================================================== 00:18:46.968 [2024-11-19T04:21:03.526Z] Total : 36801.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:46.968 00:18:48.346 [2024-11-19T04:21:04.904Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.346 Nvme0n1 : 6.00 36863.67 144.00 0.00 0.00 0.00 0.00 0.00 00:18:48.346 [2024-11-19T04:21:04.904Z] 
=================================================================================================================== 00:18:48.346 [2024-11-19T04:21:04.904Z] Total : 36863.67 144.00 0.00 0.00 0.00 0.00 0.00 00:18:48.346 00:18:49.283 [2024-11-19T04:21:05.841Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.283 Nvme0n1 : 7.00 36955.00 144.36 0.00 0.00 0.00 0.00 0.00 00:18:49.283 [2024-11-19T04:21:05.841Z] =================================================================================================================== 00:18:49.283 [2024-11-19T04:21:05.841Z] Total : 36955.00 144.36 0.00 0.00 0.00 0.00 0.00 00:18:49.283 00:18:50.220 [2024-11-19T04:21:06.778Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.220 Nvme0n1 : 8.00 37019.50 144.61 0.00 0.00 0.00 0.00 0.00 00:18:50.220 [2024-11-19T04:21:06.778Z] =================================================================================================================== 00:18:50.220 [2024-11-19T04:21:06.778Z] Total : 37019.50 144.61 0.00 0.00 0.00 0.00 0.00 00:18:50.220 00:18:51.156 [2024-11-19T04:21:07.714Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.156 Nvme0n1 : 9.00 37087.78 144.87 0.00 0.00 0.00 0.00 0.00 00:18:51.156 [2024-11-19T04:21:07.714Z] =================================================================================================================== 00:18:51.156 [2024-11-19T04:21:07.714Z] Total : 37087.78 144.87 0.00 0.00 0.00 0.00 0.00 00:18:51.156 00:18:52.092 [2024-11-19T04:21:08.650Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.092 Nvme0n1 : 10.00 37149.50 145.12 0.00 0.00 0.00 0.00 0.00 00:18:52.092 [2024-11-19T04:21:08.650Z] =================================================================================================================== 00:18:52.092 [2024-11-19T04:21:08.650Z] Total : 37149.50 145.12 0.00 0.00 0.00 0.00 0.00 00:18:52.092 00:18:52.092 00:18:52.092 Latency(us) 00:18:52.092 [2024-11-19T04:21:08.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.092 [2024-11-19T04:21:08.650Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.092 Nvme0n1 : 10.00 37150.13 145.12 0.00 0.00 3442.99 2136.47 7654.60 00:18:52.092 [2024-11-19T04:21:08.650Z] =================================================================================================================== 00:18:52.092 [2024-11-19T04:21:08.650Z] Total : 37150.13 145.12 0.00 0.00 3442.99 2136.47 7654.60 00:18:52.092 0 00:18:52.092 05:21:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1816716 00:18:52.092 05:21:08 -- common/autotest_common.sh@936 -- # '[' -z 1816716 ']' 00:18:52.092 05:21:08 -- common/autotest_common.sh@940 -- # kill -0 1816716 00:18:52.092 05:21:08 -- common/autotest_common.sh@941 -- # uname 00:18:52.092 05:21:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.092 05:21:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1816716 00:18:52.092 05:21:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:52.092 05:21:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:52.092 05:21:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1816716' 00:18:52.092 killing process with pid 1816716 00:18:52.092 05:21:08 -- common/autotest_common.sh@955 -- # kill 1816716 00:18:52.092 Received shutdown signal, test time was about 10.000000 seconds 
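
What distinguishes this pass is that the lvstore was grown while bdevperf had I/O in flight: bdev_lvol_grow_lvstore lands at the 2-second mark, and total_data_clusters is then expected to have jumped from 49 to 99. The verification, using jq as the suite does:

```bash
rpc.py bdev_lvol_grow_lvstore -u "$lvs"        # issued while the run is live
data_clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" \
                    | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 ))                      # 400 MiB at 4 MiB clusters, less metadata
```
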
00:18:52.092 00:18:52.092 Latency(us) 00:18:52.092 [2024-11-19T04:21:08.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.092 [2024-11-19T04:21:08.650Z] =================================================================================================================== 00:18:52.092 [2024-11-19T04:21:08.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.092 05:21:08 -- common/autotest_common.sh@960 -- # wait 1816716 00:18:52.350 05:21:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:52.609 05:21:09 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:52.609 05:21:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1813377 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@74 -- # wait 1813377 00:18:52.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1813377 Killed "${NVMF_APP[@]}" "$@" 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:52.867 05:21:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:52.867 05:21:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:52.867 05:21:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.867 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:18:52.867 05:21:09 -- nvmf/common.sh@469 -- # nvmfpid=1819416 00:18:52.867 05:21:09 -- nvmf/common.sh@470 -- # waitforlisten 1819416 00:18:52.867 05:21:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:52.867 05:21:09 -- common/autotest_common.sh@829 -- # '[' -z 1819416 ']' 00:18:52.867 05:21:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.867 05:21:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.867 05:21:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.867 05:21:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.867 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:18:52.867 [2024-11-19 05:21:09.301911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.867 [2024-11-19 05:21:09.301966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.867 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.867 [2024-11-19 05:21:09.374787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.867 [2024-11-19 05:21:09.410991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.867 [2024-11-19 05:21:09.411105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
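
The dirty branch is taken here: instead of a clean shutdown, the first nvmf_tgt (pid 1813377) is killed with SIGKILL, a replacement target is started, and re-creating the aio bdev forces blobstore recovery, visible in the bs_recover notices that follow. Schematically (the waitforlisten polling between target start and first RPC is elided):

```bash
kill -9 "$nvmfpid"                     # leave the blobstore dirty on disk
nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # same flags as the original target
nvmfpid=$!
rpc.py bdev_aio_create aio_file aio_bdev 4096   # triggers "Performing recovery on blobstore"
```
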
00:18:52.867 [2024-11-19 05:21:09.411115] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.867 [2024-11-19 05:21:09.411124] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.867 [2024-11-19 05:21:09.411150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.805 05:21:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.805 05:21:10 -- common/autotest_common.sh@862 -- # return 0 00:18:53.805 05:21:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:53.805 05:21:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.805 05:21:10 -- common/autotest_common.sh@10 -- # set +x 00:18:53.805 05:21:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.805 05:21:10 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:53.805 [2024-11-19 05:21:10.319359] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:53.805 [2024-11-19 05:21:10.319454] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:53.805 [2024-11-19 05:21:10.319482] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:53.805 05:21:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:53.805 05:21:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 9e101465-c194-4327-8c03-84905ca258d6 00:18:53.805 05:21:10 -- common/autotest_common.sh@897 -- # local bdev_name=9e101465-c194-4327-8c03-84905ca258d6 00:18:53.805 05:21:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:53.805 05:21:10 -- common/autotest_common.sh@899 -- # local i 00:18:53.805 05:21:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:53.805 05:21:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:53.806 05:21:10 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:54.064 05:21:10 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9e101465-c194-4327-8c03-84905ca258d6 -t 2000 00:18:54.324 [ 00:18:54.324 { 00:18:54.324 "name": "9e101465-c194-4327-8c03-84905ca258d6", 00:18:54.324 "aliases": [ 00:18:54.324 "lvs/lvol" 00:18:54.324 ], 00:18:54.324 "product_name": "Logical Volume", 00:18:54.324 "block_size": 4096, 00:18:54.324 "num_blocks": 38912, 00:18:54.324 "uuid": "9e101465-c194-4327-8c03-84905ca258d6", 00:18:54.324 "assigned_rate_limits": { 00:18:54.324 "rw_ios_per_sec": 0, 00:18:54.324 "rw_mbytes_per_sec": 0, 00:18:54.324 "r_mbytes_per_sec": 0, 00:18:54.324 "w_mbytes_per_sec": 0 00:18:54.324 }, 00:18:54.324 "claimed": false, 00:18:54.324 "zoned": false, 00:18:54.324 "supported_io_types": { 00:18:54.324 "read": true, 00:18:54.324 "write": true, 00:18:54.324 "unmap": true, 00:18:54.324 "write_zeroes": true, 00:18:54.324 "flush": false, 00:18:54.324 "reset": true, 00:18:54.324 "compare": false, 00:18:54.324 "compare_and_write": false, 00:18:54.324 "abort": false, 00:18:54.324 "nvme_admin": false, 00:18:54.324 "nvme_io": false 00:18:54.324 }, 00:18:54.324 "driver_specific": { 00:18:54.324 "lvol": { 00:18:54.324 "lvol_store_uuid": "93cc9342-4581-4452-89f8-e8ca87f96b56", 00:18:54.324 "base_bdev": "aio_bdev", 00:18:54.324 "thin_provision": false, 
00:18:54.324 "snapshot": false, 00:18:54.324 "clone": false, 00:18:54.324 "esnap_clone": false 00:18:54.324 } 00:18:54.324 } 00:18:54.324 } 00:18:54.324 ] 00:18:54.324 05:21:10 -- common/autotest_common.sh@905 -- # return 0 00:18:54.324 05:21:10 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:54.324 05:21:10 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:54.324 05:21:10 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:54.324 05:21:10 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:54.324 05:21:10 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:54.583 05:21:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:54.583 05:21:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:54.842 [2024-11-19 05:21:11.219679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:54.842 05:21:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:54.842 05:21:11 -- common/autotest_common.sh@650 -- # local es=0 00:18:54.842 05:21:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:54.842 05:21:11 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.842 05:21:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.842 05:21:11 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.842 05:21:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.842 05:21:11 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.842 05:21:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.842 05:21:11 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:54.842 05:21:11 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:54.842 05:21:11 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:55.101 request: 00:18:55.101 { 00:18:55.101 "uuid": "93cc9342-4581-4452-89f8-e8ca87f96b56", 00:18:55.101 "method": "bdev_lvol_get_lvstores", 00:18:55.101 "req_id": 1 00:18:55.101 } 00:18:55.101 Got JSON-RPC error response 00:18:55.101 response: 00:18:55.101 { 00:18:55.101 "code": -19, 00:18:55.101 "message": "No such device" 00:18:55.101 } 00:18:55.101 05:21:11 -- common/autotest_common.sh@653 -- # es=1 00:18:55.101 05:21:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.101 05:21:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.101 05:21:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.101 05:21:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:55.101 aio_bdev 00:18:55.101 05:21:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9e101465-c194-4327-8c03-84905ca258d6 00:18:55.101 05:21:11 -- common/autotest_common.sh@897 -- # local bdev_name=9e101465-c194-4327-8c03-84905ca258d6 00:18:55.101 05:21:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:55.101 05:21:11 -- common/autotest_common.sh@899 -- # local i 00:18:55.101 05:21:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:55.101 05:21:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:55.101 05:21:11 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:55.360 05:21:11 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9e101465-c194-4327-8c03-84905ca258d6 -t 2000 00:18:55.619 [ 00:18:55.619 { 00:18:55.619 "name": "9e101465-c194-4327-8c03-84905ca258d6", 00:18:55.619 "aliases": [ 00:18:55.619 "lvs/lvol" 00:18:55.619 ], 00:18:55.619 "product_name": "Logical Volume", 00:18:55.619 "block_size": 4096, 00:18:55.619 "num_blocks": 38912, 00:18:55.619 "uuid": "9e101465-c194-4327-8c03-84905ca258d6", 00:18:55.619 "assigned_rate_limits": { 00:18:55.619 "rw_ios_per_sec": 0, 00:18:55.619 "rw_mbytes_per_sec": 0, 00:18:55.619 "r_mbytes_per_sec": 0, 00:18:55.619 "w_mbytes_per_sec": 0 00:18:55.619 }, 00:18:55.619 "claimed": false, 00:18:55.619 "zoned": false, 00:18:55.619 "supported_io_types": { 00:18:55.619 "read": true, 00:18:55.619 "write": true, 00:18:55.619 "unmap": true, 00:18:55.619 "write_zeroes": true, 00:18:55.619 "flush": false, 00:18:55.619 "reset": true, 00:18:55.619 "compare": false, 00:18:55.619 "compare_and_write": false, 00:18:55.619 "abort": false, 00:18:55.619 "nvme_admin": false, 00:18:55.619 "nvme_io": false 00:18:55.619 }, 00:18:55.619 "driver_specific": { 00:18:55.619 "lvol": { 00:18:55.619 "lvol_store_uuid": "93cc9342-4581-4452-89f8-e8ca87f96b56", 00:18:55.619 "base_bdev": "aio_bdev", 00:18:55.619 "thin_provision": false, 00:18:55.619 "snapshot": false, 00:18:55.619 "clone": false, 00:18:55.619 "esnap_clone": false 00:18:55.619 } 00:18:55.619 } 00:18:55.619 } 00:18:55.619 ] 00:18:55.619 05:21:11 -- common/autotest_common.sh@905 -- # return 0 00:18:55.619 05:21:11 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:55.619 05:21:11 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:55.619 05:21:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:55.619 05:21:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:55.619 05:21:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:55.878 05:21:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:55.878 05:21:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9e101465-c194-4327-8c03-84905ca258d6 00:18:56.137 05:21:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93cc9342-4581-4452-89f8-e8ca87f96b56 00:18:56.396 05:21:12 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:18:56.396 05:21:12 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:56.396 00:18:56.396 real 0m17.556s 00:18:56.396 user 0m45.264s 00:18:56.396 sys 0m3.316s 00:18:56.396 05:21:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:56.396 05:21:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.396 ************************************ 00:18:56.396 END TEST lvs_grow_dirty 00:18:56.396 ************************************ 00:18:56.656 05:21:12 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:56.656 05:21:12 -- common/autotest_common.sh@806 -- # type=--id 00:18:56.656 05:21:12 -- common/autotest_common.sh@807 -- # id=0 00:18:56.656 05:21:12 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:56.656 05:21:12 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:56.656 05:21:12 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:56.656 05:21:12 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:56.656 05:21:12 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:56.656 05:21:12 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:56.656 nvmf_trace.0 00:18:56.656 05:21:13 -- common/autotest_common.sh@821 -- # return 0 00:18:56.656 05:21:13 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:56.656 05:21:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:56.656 05:21:13 -- nvmf/common.sh@116 -- # sync 00:18:56.656 05:21:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:56.656 05:21:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:56.656 05:21:13 -- nvmf/common.sh@119 -- # set +e 00:18:56.656 05:21:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:56.656 05:21:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:56.656 rmmod nvme_rdma 00:18:56.656 rmmod nvme_fabrics 00:18:56.656 05:21:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:56.656 05:21:13 -- nvmf/common.sh@123 -- # set -e 00:18:56.656 05:21:13 -- nvmf/common.sh@124 -- # return 0 00:18:56.656 05:21:13 -- nvmf/common.sh@477 -- # '[' -n 1819416 ']' 00:18:56.656 05:21:13 -- nvmf/common.sh@478 -- # killprocess 1819416 00:18:56.656 05:21:13 -- common/autotest_common.sh@936 -- # '[' -z 1819416 ']' 00:18:56.656 05:21:13 -- common/autotest_common.sh@940 -- # kill -0 1819416 00:18:56.656 05:21:13 -- common/autotest_common.sh@941 -- # uname 00:18:56.656 05:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.656 05:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1819416 00:18:56.656 05:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:56.656 05:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:56.656 05:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1819416' 00:18:56.656 killing process with pid 1819416 00:18:56.656 05:21:13 -- common/autotest_common.sh@955 -- # kill 1819416 00:18:56.656 05:21:13 -- common/autotest_common.sh@960 -- # wait 1819416 00:18:56.916 05:21:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:56.916 05:21:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:56.916 00:18:56.916 real 0m41.791s 00:18:56.916 user 1m7.181s 00:18:56.916 sys 0m10.165s 00:18:56.916 05:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:56.916 05:21:13 -- common/autotest_common.sh@10 -- 
# set +x 00:18:56.916 ************************************ 00:18:56.916 END TEST nvmf_lvs_grow 00:18:56.916 ************************************ 00:18:56.916 05:21:13 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:56.916 05:21:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.916 05:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.916 05:21:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.916 ************************************ 00:18:56.916 START TEST nvmf_bdev_io_wait 00:18:56.916 ************************************ 00:18:56.916 05:21:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:56.916 * Looking for test storage... 00:18:56.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:56.916 05:21:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:56.916 05:21:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:56.916 05:21:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:57.174 05:21:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:57.174 05:21:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:57.174 05:21:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:57.174 05:21:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:57.174 05:21:13 -- scripts/common.sh@335 -- # IFS=.-: 00:18:57.174 05:21:13 -- scripts/common.sh@335 -- # read -ra ver1 00:18:57.174 05:21:13 -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.174 05:21:13 -- scripts/common.sh@336 -- # read -ra ver2 00:18:57.174 05:21:13 -- scripts/common.sh@337 -- # local 'op=<' 00:18:57.174 05:21:13 -- scripts/common.sh@339 -- # ver1_l=2 00:18:57.174 05:21:13 -- scripts/common.sh@340 -- # ver2_l=1 00:18:57.174 05:21:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:57.174 05:21:13 -- scripts/common.sh@343 -- # case "$op" in 00:18:57.174 05:21:13 -- scripts/common.sh@344 -- # : 1 00:18:57.174 05:21:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:57.174 05:21:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.174 05:21:13 -- scripts/common.sh@364 -- # decimal 1 00:18:57.174 05:21:13 -- scripts/common.sh@352 -- # local d=1 00:18:57.174 05:21:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.174 05:21:13 -- scripts/common.sh@354 -- # echo 1 00:18:57.174 05:21:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:57.174 05:21:13 -- scripts/common.sh@365 -- # decimal 2 00:18:57.174 05:21:13 -- scripts/common.sh@352 -- # local d=2 00:18:57.174 05:21:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.174 05:21:13 -- scripts/common.sh@354 -- # echo 2 00:18:57.174 05:21:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:57.174 05:21:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:57.174 05:21:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:57.174 05:21:13 -- scripts/common.sh@367 -- # return 0 00:18:57.174 05:21:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.174 05:21:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:57.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.174 --rc genhtml_branch_coverage=1 00:18:57.174 --rc genhtml_function_coverage=1 00:18:57.174 --rc genhtml_legend=1 00:18:57.174 --rc geninfo_all_blocks=1 00:18:57.174 --rc geninfo_unexecuted_blocks=1 00:18:57.174 00:18:57.174 ' 00:18:57.174 05:21:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:57.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.174 --rc genhtml_branch_coverage=1 00:18:57.174 --rc genhtml_function_coverage=1 00:18:57.174 --rc genhtml_legend=1 00:18:57.174 --rc geninfo_all_blocks=1 00:18:57.174 --rc geninfo_unexecuted_blocks=1 00:18:57.174 00:18:57.174 ' 00:18:57.174 05:21:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:57.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.174 --rc genhtml_branch_coverage=1 00:18:57.174 --rc genhtml_function_coverage=1 00:18:57.174 --rc genhtml_legend=1 00:18:57.174 --rc geninfo_all_blocks=1 00:18:57.174 --rc geninfo_unexecuted_blocks=1 00:18:57.174 00:18:57.174 ' 00:18:57.174 05:21:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:57.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.174 --rc genhtml_branch_coverage=1 00:18:57.174 --rc genhtml_function_coverage=1 00:18:57.174 --rc genhtml_legend=1 00:18:57.174 --rc geninfo_all_blocks=1 00:18:57.174 --rc geninfo_unexecuted_blocks=1 00:18:57.174 00:18:57.174 ' 00:18:57.174 05:21:13 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.174 05:21:13 -- nvmf/common.sh@7 -- # uname -s 00:18:57.174 05:21:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.174 05:21:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.174 05:21:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.174 05:21:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.174 05:21:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.174 05:21:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.174 05:21:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.174 05:21:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.174 05:21:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.174 05:21:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.174 05:21:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:57.174 05:21:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:57.174 05:21:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.174 05:21:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.174 05:21:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.174 05:21:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:57.174 05:21:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.174 05:21:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.174 05:21:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.174 05:21:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.174 05:21:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.174 05:21:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.174 05:21:13 -- paths/export.sh@5 -- # export PATH 00:18:57.175 05:21:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.175 05:21:13 -- nvmf/common.sh@46 -- # : 0 00:18:57.175 05:21:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:57.175 05:21:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:57.175 05:21:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:57.175 05:21:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.175 05:21:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.175 05:21:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:57.175 05:21:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:57.175 05:21:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:57.175 05:21:13 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.175 05:21:13 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.175 05:21:13 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:57.175 05:21:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:57.175 05:21:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.175 05:21:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.175 05:21:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.175 05:21:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.175 05:21:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.175 05:21:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.175 05:21:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.175 05:21:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.175 05:21:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.175 05:21:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.175 05:21:13 -- common/autotest_common.sh@10 -- # set +x 00:19:03.743 05:21:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:03.743 05:21:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:03.743 05:21:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:03.743 05:21:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:03.743 05:21:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:03.743 05:21:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:03.743 05:21:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:03.743 05:21:20 -- nvmf/common.sh@294 -- # net_devs=() 00:19:03.743 05:21:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:03.743 05:21:20 -- nvmf/common.sh@295 -- # e810=() 00:19:03.743 05:21:20 -- nvmf/common.sh@295 -- # local -ga e810 00:19:03.743 05:21:20 -- nvmf/common.sh@296 -- # x722=() 00:19:03.743 05:21:20 -- nvmf/common.sh@296 -- # local -ga x722 00:19:03.743 05:21:20 -- nvmf/common.sh@297 -- # mlx=() 00:19:03.743 05:21:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:03.743 05:21:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.743 05:21:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:03.743 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:03.743 05:21:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.743 05:21:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:03.743 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:03.743 05:21:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.743 05:21:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.743 05:21:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.743 05:21:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:03.743 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:03.743 05:21:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.743 05:21:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.743 05:21:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:03.743 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:03.743 05:21:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.743 05:21:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:03.743 05:21:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:03.743 05:21:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:03.743 05:21:20 -- nvmf/common.sh@57 -- # uname 00:19:03.743 05:21:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:03.743 05:21:20 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:19:03.743 05:21:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:03.743 05:21:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:03.743 05:21:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:03.743 05:21:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:03.743 05:21:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:03.743 05:21:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:03.743 05:21:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:03.743 05:21:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:03.743 05:21:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:03.743 05:21:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.743 05:21:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:03.743 05:21:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:03.743 05:21:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:03.743 05:21:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:03.743 05:21:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:03.743 05:21:20 -- nvmf/common.sh@104 -- # continue 2 00:19:03.743 05:21:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.743 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:03.743 05:21:20 -- nvmf/common.sh@104 -- # continue 2 00:19:03.743 05:21:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:03.743 05:21:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:03.743 05:21:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:03.743 05:21:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:03.743 05:21:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:03.743 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.743 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:03.743 altname enp217s0f0np0 00:19:03.743 altname ens818f0np0 00:19:03.743 inet 192.168.100.8/24 scope global mlx_0_0 00:19:03.743 valid_lft forever preferred_lft forever 00:19:03.743 05:21:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:03.743 05:21:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:03.743 05:21:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:03.743 05:21:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:03.743 05:21:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:03.743 05:21:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:03.743 05:21:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:03.743 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.743 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:03.743 altname enp217s0f1np1 00:19:03.743 altname ens818f1np1 00:19:03.743 inet 192.168.100.9/24 scope global mlx_0_1 00:19:03.743 valid_lft forever preferred_lft forever 00:19:03.743 05:21:20 -- nvmf/common.sh@410 -- # return 0 00:19:03.743 05:21:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:03.743 05:21:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:03.744 05:21:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:03.744 05:21:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:03.744 05:21:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:03.744 05:21:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.744 05:21:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:03.744 05:21:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:03.744 05:21:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:03.744 05:21:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:03.744 05:21:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:03.744 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.744 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.744 05:21:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:03.744 05:21:20 -- nvmf/common.sh@104 -- # continue 2 00:19:03.744 05:21:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:03.744 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.744 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:03.744 05:21:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.744 05:21:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.744 05:21:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:03.744 05:21:20 -- nvmf/common.sh@104 -- # continue 2 00:19:03.744 05:21:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:03.744 05:21:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:03.744 05:21:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:03.744 05:21:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:03.744 05:21:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:03.744 05:21:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:03.744 05:21:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:03.744 05:21:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:03.744 192.168.100.9' 00:19:03.744 05:21:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:03.744 192.168.100.9' 00:19:03.744 05:21:20 -- nvmf/common.sh@445 -- # head -n 1 00:19:03.744 05:21:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:03.744 05:21:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:03.744 192.168.100.9' 00:19:03.744 05:21:20 -- nvmf/common.sh@446 -- # tail -n +2 00:19:03.744 05:21:20 -- nvmf/common.sh@446 -- # head -n 1 00:19:03.744 05:21:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:03.744 05:21:20 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:03.744 05:21:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:03.744 05:21:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:03.744 05:21:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:03.744 05:21:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:03.744 05:21:20 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:03.744 05:21:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:03.744 05:21:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.744 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:03.744 05:21:20 -- nvmf/common.sh@469 -- # nvmfpid=1823470 00:19:03.744 05:21:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:03.744 05:21:20 -- nvmf/common.sh@470 -- # waitforlisten 1823470 00:19:03.744 05:21:20 -- common/autotest_common.sh@829 -- # '[' -z 1823470 ']' 00:19:03.744 05:21:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.744 05:21:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.744 05:21:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.744 05:21:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.744 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.004 [2024-11-19 05:21:20.311922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.004 [2024-11-19 05:21:20.311975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.004 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.004 [2024-11-19 05:21:20.383227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.004 [2024-11-19 05:21:20.422124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:04.004 [2024-11-19 05:21:20.422241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.004 [2024-11-19 05:21:20.422251] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.004 [2024-11-19 05:21:20.422262] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
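Both capture paths named in the app_setup_trace notices above can be driven straight from the shell once the target is up; a minimal sketch, using exactly the arguments the target printed (the /tmp destination is only an example):

    # live snapshot of tracepoint events from the running nvmf target
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0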
00:19:04.004 [2024-11-19 05:21:20.422318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.004 [2024-11-19 05:21:20.422411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.004 [2024-11-19 05:21:20.422474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.004 [2024-11-19 05:21:20.422476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.004 05:21:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.004 05:21:20 -- common/autotest_common.sh@862 -- # return 0 00:19:04.004 05:21:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:04.004 05:21:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.004 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.004 05:21:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.004 05:21:20 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:04.004 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.004 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.004 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.004 05:21:20 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:04.004 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.004 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.004 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.004 05:21:20 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:04.004 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.004 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.264 [2024-11-19 05:21:20.591492] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x97d150/0x981640) succeed. 00:19:04.264 [2024-11-19 05:21:20.600366] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x97e740/0x9c2ce0) succeed. 
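The rpc_cmd calls in this block are thin wrappers around scripts/rpc.py, so the bring-up sequence the test drives here and in the next few calls can be reproduced by hand; a sketch with the values from this run (rpc.py path shortened):

    # RDMA transport with 1024 shared receive buffers and an 8192-byte I/O unit size
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks, exposed through subsystem cnode1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420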
00:19:04.264 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:04.264 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.264 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.264 Malloc0 00:19:04.264 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:04.264 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.264 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.264 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.264 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.264 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.264 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:04.264 05:21:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.264 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.264 [2024-11-19 05:21:20.777681] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:04.264 05:21:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1823507 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:04.264 05:21:20 -- target/bdev_io_wait.sh@30 -- # READ_PID=1823509 00:19:04.264 05:21:20 -- nvmf/common.sh@520 -- # config=() 00:19:04.264 05:21:20 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.264 05:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.265 { 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme$subsystem", 00:19:04.265 "trtype": "$TEST_TRANSPORT", 00:19:04.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "$NVMF_PORT", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.265 "hdgst": ${hdgst:-false}, 00:19:04.265 "ddgst": ${ddgst:-false} 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 } 00:19:04.265 EOF 00:19:04.265 )") 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1823511 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # config=() 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.265 05:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.265 { 00:19:04.265 "params": { 00:19:04.265 "name": 
"Nvme$subsystem", 00:19:04.265 "trtype": "$TEST_TRANSPORT", 00:19:04.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "$NVMF_PORT", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.265 "hdgst": ${hdgst:-false}, 00:19:04.265 "ddgst": ${ddgst:-false} 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 } 00:19:04.265 EOF 00:19:04.265 )") 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1823514 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # cat 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@35 -- # sync 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # config=() 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.265 05:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.265 { 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme$subsystem", 00:19:04.265 "trtype": "$TEST_TRANSPORT", 00:19:04.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "$NVMF_PORT", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.265 "hdgst": ${hdgst:-false}, 00:19:04.265 "ddgst": ${ddgst:-false} 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 } 00:19:04.265 EOF 00:19:04.265 )") 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # config=() 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # cat 00:19:04.265 05:21:20 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.265 05:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.265 { 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme$subsystem", 00:19:04.265 "trtype": "$TEST_TRANSPORT", 00:19:04.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "$NVMF_PORT", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.265 "hdgst": ${hdgst:-false}, 00:19:04.265 "ddgst": ${ddgst:-false} 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 } 00:19:04.265 EOF 00:19:04.265 )") 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # cat 00:19:04.265 05:21:20 -- target/bdev_io_wait.sh@37 -- # wait 1823507 00:19:04.265 05:21:20 -- nvmf/common.sh@542 -- # cat 00:19:04.265 05:21:20 -- nvmf/common.sh@544 -- # jq . 00:19:04.265 05:21:20 -- nvmf/common.sh@544 -- # jq . 00:19:04.265 05:21:20 -- nvmf/common.sh@544 -- # jq . 
00:19:04.265 05:21:20 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.265 05:21:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme1", 00:19:04.265 "trtype": "rdma", 00:19:04.265 "traddr": "192.168.100.8", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "4420", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.265 "hdgst": false, 00:19:04.265 "ddgst": false 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 }' 00:19:04.265 05:21:20 -- nvmf/common.sh@544 -- # jq . 00:19:04.265 05:21:20 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.265 05:21:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme1", 00:19:04.265 "trtype": "rdma", 00:19:04.265 "traddr": "192.168.100.8", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "4420", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.265 "hdgst": false, 00:19:04.265 "ddgst": false 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 }' 00:19:04.265 05:21:20 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.265 05:21:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme1", 00:19:04.265 "trtype": "rdma", 00:19:04.265 "traddr": "192.168.100.8", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "4420", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.265 "hdgst": false, 00:19:04.265 "ddgst": false 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 }' 00:19:04.265 05:21:20 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.265 05:21:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.265 "params": { 00:19:04.265 "name": "Nvme1", 00:19:04.265 "trtype": "rdma", 00:19:04.265 "traddr": "192.168.100.8", 00:19:04.265 "adrfam": "ipv4", 00:19:04.265 "trsvcid": "4420", 00:19:04.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.265 "hdgst": false, 00:19:04.265 "ddgst": false 00:19:04.265 }, 00:19:04.265 "method": "bdev_nvme_attach_controller" 00:19:04.265 }' 00:19:04.265 [2024-11-19 05:21:20.823641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.265 [2024-11-19 05:21:20.823696] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:04.524 [2024-11-19 05:21:20.828720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.524 [2024-11-19 05:21:20.828768] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:04.524 [2024-11-19 05:21:20.829104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:04.524 [2024-11-19 05:21:20.829149] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:04.524 [2024-11-19 05:21:20.830746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.524 [2024-11-19 05:21:20.830791] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.524 [2024-11-19 05:21:21.006754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.524 [2024-11-19 05:21:21.030198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.783 [2024-11-19 05:21:21.107362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.783 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.783 [2024-11-19 05:21:21.135551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:04.783 [2024-11-19 05:21:21.145903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.783 [2024-11-19 05:21:21.167261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:04.783 [2024-11-19 05:21:21.263322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.783 [2024-11-19 05:21:21.290971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:05.042 Running I/O for 1 seconds... 00:19:05.042 Running I/O for 1 seconds... 00:19:05.042 Running I/O for 1 seconds... 00:19:05.042 Running I/O for 1 seconds... 
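At this point four bdevperf instances are running concurrently against the same cnode1 subsystem, one workload per core mask (0x10 write, 0x20 read, 0x40 flush, 0x80 unmap). The WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID bookkeeping traced above is the standard background-job fan-out; a rough condensation (the & and $! plumbing is inferred, flags are copied from the trace):

    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"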
00:19:05.977
00:19:05.977 Latency(us)
00:19:05.977 [2024-11-19T04:21:22.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.977 [2024-11-19T04:21:22.535Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:19:05.977 Nvme1n1 : 1.00 20134.50 78.65 0.00 0.00 6340.69 3407.87 15099.49
00:19:05.977 [2024-11-19T04:21:22.535Z] ===================================================================================================================
00:19:05.977 [2024-11-19T04:21:22.535Z] Total : 20134.50 78.65 0.00 0.00 6340.69 3407.87 15099.49
00:19:05.977
00:19:05.977 Latency(us)
00:19:05.977 [2024-11-19T04:21:22.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.977 [2024-11-19T04:21:22.535Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:19:05.977 Nvme1n1 : 1.01 15309.55 59.80 0.00 0.00 8334.95 5269.09 18035.51
00:19:05.978 [2024-11-19T04:21:22.536Z] ===================================================================================================================
00:19:05.978 [2024-11-19T04:21:22.536Z] Total : 15309.55 59.80 0.00 0.00 8334.95 5269.09 18035.51
00:19:05.978
00:19:05.978 Latency(us)
00:19:05.978 [2024-11-19T04:21:22.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.978 [2024-11-19T04:21:22.536Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:19:05.978 Nvme1n1 : 1.00 266227.03 1039.95 0.00 0.00 479.49 190.05 1631.85
00:19:05.978 [2024-11-19T04:21:22.536Z] ===================================================================================================================
00:19:05.978 [2024-11-19T04:21:22.536Z] Total : 266227.03 1039.95 0.00 0.00 479.49 190.05 1631.85
00:19:05.978
00:19:05.978 Latency(us)
00:19:05.978 [2024-11-19T04:21:22.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.978 [2024-11-19T04:21:22.536Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:19:05.978 Nvme1n1 : 1.00 14283.44 55.79 0.00 0.00 8940.14 3879.73 18769.51
00:19:05.978 [2024-11-19T04:21:22.536Z] ===================================================================================================================
00:19:05.978 [2024-11-19T04:21:22.536Z] Total : 14283.44 55.79 0.00 0.00 8940.14 3879.73 18769.51
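Two observations on these four tables. The flush job reports roughly 266K IOPS against 14-20K for the data-moving workloads, plausibly because the namespace is backed by a Malloc (RAM) bdev, so a flush has no media to sync and completes almost immediately. For the data workloads the Average latency column lines up with Little's law (average latency ≈ queue depth / IOPS); checking the write job with its -q 128 depth:

    # 128 outstanding I/Os at 15309.55 IOPS, converted to microseconds
    awk 'BEGIN { printf "%.0f us\n", 128 / 15309.55 * 1e6 }'
    # prints 8361 us, close to the 8334.95 us average reported above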
00:19:06.506 rmmod nvme_fabrics 00:19:06.506 05:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:06.506 05:21:22 -- nvmf/common.sh@123 -- # set -e 00:19:06.506 05:21:22 -- nvmf/common.sh@124 -- # return 0 00:19:06.506 05:21:22 -- nvmf/common.sh@477 -- # '[' -n 1823470 ']' 00:19:06.506 05:21:22 -- nvmf/common.sh@478 -- # killprocess 1823470 00:19:06.506 05:21:22 -- common/autotest_common.sh@936 -- # '[' -z 1823470 ']' 00:19:06.506 05:21:22 -- common/autotest_common.sh@940 -- # kill -0 1823470 00:19:06.506 05:21:22 -- common/autotest_common.sh@941 -- # uname 00:19:06.506 05:21:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.506 05:21:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1823470 00:19:06.506 05:21:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.506 05:21:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:06.506 05:21:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1823470' 00:19:06.506 killing process with pid 1823470 00:19:06.506 05:21:22 -- common/autotest_common.sh@955 -- # kill 1823470 00:19:06.506 05:21:22 -- common/autotest_common.sh@960 -- # wait 1823470 00:19:06.766 05:21:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:06.766 05:21:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:06.766 00:19:06.766 real 0m9.790s 00:19:06.766 user 0m17.884s 00:19:06.766 sys 0m6.520s 00:19:06.766 05:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:06.766 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:19:06.766 ************************************ 00:19:06.766 END TEST nvmf_bdev_io_wait 00:19:06.766 ************************************ 00:19:06.766 05:21:23 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:06.766 05:21:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:06.766 05:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.766 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:19:06.766 ************************************ 00:19:06.766 START TEST nvmf_queue_depth 00:19:06.766 ************************************ 00:19:06.766 05:21:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:06.766 * Looking for test storage... 
00:19:06.766 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:06.766 05:21:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:06.766 05:21:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:06.766 05:21:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:07.027 05:21:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:07.027 05:21:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:07.027 05:21:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:07.027 05:21:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:07.027 05:21:23 -- scripts/common.sh@335 -- # IFS=.-: 00:19:07.027 05:21:23 -- scripts/common.sh@335 -- # read -ra ver1 00:19:07.027 05:21:23 -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.027 05:21:23 -- scripts/common.sh@336 -- # read -ra ver2 00:19:07.027 05:21:23 -- scripts/common.sh@337 -- # local 'op=<' 00:19:07.027 05:21:23 -- scripts/common.sh@339 -- # ver1_l=2 00:19:07.027 05:21:23 -- scripts/common.sh@340 -- # ver2_l=1 00:19:07.027 05:21:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:07.027 05:21:23 -- scripts/common.sh@343 -- # case "$op" in 00:19:07.027 05:21:23 -- scripts/common.sh@344 -- # : 1 00:19:07.027 05:21:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:07.027 05:21:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.027 05:21:23 -- scripts/common.sh@364 -- # decimal 1 00:19:07.027 05:21:23 -- scripts/common.sh@352 -- # local d=1 00:19:07.027 05:21:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.027 05:21:23 -- scripts/common.sh@354 -- # echo 1 00:19:07.027 05:21:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:07.027 05:21:23 -- scripts/common.sh@365 -- # decimal 2 00:19:07.027 05:21:23 -- scripts/common.sh@352 -- # local d=2 00:19:07.027 05:21:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.027 05:21:23 -- scripts/common.sh@354 -- # echo 2 00:19:07.027 05:21:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:07.027 05:21:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:07.027 05:21:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:07.027 05:21:23 -- scripts/common.sh@367 -- # return 0 00:19:07.027 05:21:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.027 05:21:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.027 --rc genhtml_branch_coverage=1 00:19:07.027 --rc genhtml_function_coverage=1 00:19:07.027 --rc genhtml_legend=1 00:19:07.027 --rc geninfo_all_blocks=1 00:19:07.027 --rc geninfo_unexecuted_blocks=1 00:19:07.027 00:19:07.027 ' 00:19:07.027 05:21:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.027 --rc genhtml_branch_coverage=1 00:19:07.027 --rc genhtml_function_coverage=1 00:19:07.027 --rc genhtml_legend=1 00:19:07.027 --rc geninfo_all_blocks=1 00:19:07.027 --rc geninfo_unexecuted_blocks=1 00:19:07.027 00:19:07.027 ' 00:19:07.027 05:21:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.027 --rc genhtml_branch_coverage=1 00:19:07.027 --rc genhtml_function_coverage=1 00:19:07.027 --rc genhtml_legend=1 00:19:07.027 --rc geninfo_all_blocks=1 00:19:07.027 --rc geninfo_unexecuted_blocks=1 00:19:07.027 00:19:07.027 ' 
00:19:07.027 05:21:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.027 --rc genhtml_branch_coverage=1 00:19:07.027 --rc genhtml_function_coverage=1 00:19:07.027 --rc genhtml_legend=1 00:19:07.027 --rc geninfo_all_blocks=1 00:19:07.027 --rc geninfo_unexecuted_blocks=1 00:19:07.027 00:19:07.027 ' 00:19:07.027 05:21:23 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.027 05:21:23 -- nvmf/common.sh@7 -- # uname -s 00:19:07.027 05:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.027 05:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.027 05:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.027 05:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.027 05:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.027 05:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.027 05:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.027 05:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.027 05:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.027 05:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.027 05:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.027 05:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:07.027 05:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.027 05:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.028 05:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.028 05:21:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.028 05:21:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.028 05:21:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.028 05:21:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.028 05:21:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.028 05:21:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.028 05:21:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.028 05:21:23 -- paths/export.sh@5 -- # export PATH 00:19:07.028 05:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.028 05:21:23 -- nvmf/common.sh@46 -- # : 0 00:19:07.028 05:21:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:07.028 05:21:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:07.028 05:21:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:07.028 05:21:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.028 05:21:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.028 05:21:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:07.028 05:21:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:07.028 05:21:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:07.028 05:21:23 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:07.028 05:21:23 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:07.028 05:21:23 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.028 05:21:23 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:07.028 05:21:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:07.028 05:21:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.028 05:21:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:07.028 05:21:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:07.028 05:21:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:07.028 05:21:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.028 05:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.028 05:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.028 05:21:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:07.028 05:21:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:07.028 05:21:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:07.028 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:19:13.610 05:21:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.610 05:21:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.610 05:21:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.610 05:21:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.610 05:21:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.610 05:21:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.610 05:21:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.610 05:21:30 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:13.610 05:21:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.610 05:21:30 -- nvmf/common.sh@295 -- # e810=() 00:19:13.610 05:21:30 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.610 05:21:30 -- nvmf/common.sh@296 -- # x722=() 00:19:13.610 05:21:30 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.610 05:21:30 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.610 05:21:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.610 05:21:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.610 05:21:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.610 05:21:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:13.610 05:21:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:13.610 05:21:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:13.610 05:21:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.610 05:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.610 05:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:13.610 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:13.610 05:21:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:13.610 05:21:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.611 05:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.611 05:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:13.611 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:13.611 05:21:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.611 05:21:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.611 05:21:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.611 05:21:30 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.611 05:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.611 05:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.611 05:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:13.611 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:13.611 05:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.611 05:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.611 05:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.611 05:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.611 05:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.611 05:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:13.611 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:13.611 05:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.611 05:21:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.611 05:21:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.611 05:21:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:13.611 05:21:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:13.611 05:21:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:13.611 05:21:30 -- nvmf/common.sh@57 -- # uname 00:19:13.611 05:21:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:13.611 05:21:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:13.611 05:21:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:13.611 05:21:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:13.611 05:21:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:13.611 05:21:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:13.611 05:21:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:13.611 05:21:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:13.611 05:21:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:13.611 05:21:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:13.611 05:21:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:13.611 05:21:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.611 05:21:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:13.611 05:21:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:13.611 05:21:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.871 05:21:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:13.871 05:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.871 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.871 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.871 05:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:13.871 05:21:30 -- nvmf/common.sh@104 -- # continue 2 00:19:13.871 05:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.871 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.871 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.871 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.871 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.871 05:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:13.871 05:21:30 -- 
nvmf/common.sh@104 -- # continue 2 00:19:13.871 05:21:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:13.871 05:21:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:13.871 05:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:13.871 05:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:13.871 05:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.871 05:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.871 05:21:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:13.871 05:21:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:13.871 05:21:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:13.872 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.872 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:13.872 altname enp217s0f0np0 00:19:13.872 altname ens818f0np0 00:19:13.872 inet 192.168.100.8/24 scope global mlx_0_0 00:19:13.872 valid_lft forever preferred_lft forever 00:19:13.872 05:21:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:13.872 05:21:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.872 05:21:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:13.872 05:21:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:13.872 05:21:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:13.872 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.872 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:13.872 altname enp217s0f1np1 00:19:13.872 altname ens818f1np1 00:19:13.872 inet 192.168.100.9/24 scope global mlx_0_1 00:19:13.872 valid_lft forever preferred_lft forever 00:19:13.872 05:21:30 -- nvmf/common.sh@410 -- # return 0 00:19:13.872 05:21:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.872 05:21:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:13.872 05:21:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:13.872 05:21:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:13.872 05:21:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:13.872 05:21:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.872 05:21:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:13.872 05:21:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:13.872 05:21:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.872 05:21:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:13.872 05:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.872 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.872 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.872 05:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:13.872 05:21:30 -- nvmf/common.sh@104 -- # continue 2 00:19:13.872 05:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.872 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.872 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.872 05:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.872 05:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:13.872 05:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@104 -- # continue 2 00:19:13.872 05:21:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:13.872 05:21:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:13.872 05:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.872 05:21:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:13.872 05:21:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.872 05:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.872 05:21:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:13.872 192.168.100.9' 00:19:13.872 05:21:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:13.872 192.168.100.9' 00:19:13.872 05:21:30 -- nvmf/common.sh@445 -- # head -n 1 00:19:13.872 05:21:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:13.872 05:21:30 -- nvmf/common.sh@446 -- # head -n 1 00:19:13.872 05:21:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:13.872 192.168.100.9' 00:19:13.872 05:21:30 -- nvmf/common.sh@446 -- # tail -n +2 00:19:13.872 05:21:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:13.872 05:21:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:13.872 05:21:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:13.872 05:21:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:13.872 05:21:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:13.872 05:21:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:13.872 05:21:30 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:13.872 05:21:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.872 05:21:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.872 05:21:30 -- common/autotest_common.sh@10 -- # set +x 00:19:13.872 05:21:30 -- nvmf/common.sh@469 -- # nvmfpid=1827247 00:19:13.872 05:21:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:13.872 05:21:30 -- nvmf/common.sh@470 -- # waitforlisten 1827247 00:19:13.872 05:21:30 -- common/autotest_common.sh@829 -- # '[' -z 1827247 ']' 00:19:13.872 05:21:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.872 05:21:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.872 05:21:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.872 05:21:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.872 05:21:30 -- common/autotest_common.sh@10 -- # set +x 00:19:13.872 [2024-11-19 05:21:30.399326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:13.872 [2024-11-19 05:21:30.399375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.872 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.132 [2024-11-19 05:21:30.472430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.132 [2024-11-19 05:21:30.510001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:14.132 [2024-11-19 05:21:30.510108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.132 [2024-11-19 05:21:30.510119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.132 [2024-11-19 05:21:30.510128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.132 [2024-11-19 05:21:30.510150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.703 05:21:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.703 05:21:31 -- common/autotest_common.sh@862 -- # return 0 00:19:14.703 05:21:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.703 05:21:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.703 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 05:21:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.703 05:21:31 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:14.703 05:21:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.703 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.963 [2024-11-19 05:21:31.272262] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18cd3a0/0x18d1890) succeed. 00:19:14.963 [2024-11-19 05:21:31.280861] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18ce8a0/0x1912f30) succeed. 
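Condensed from the rpc_cmd calls traced in the surrounding lines, the provisioning that queue_depth.sh performs boils down to the following rpc.py invocations; the script path is the only assumption here, every argument appears verbatim in this log:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side, against bdevperf's own RPC socket:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1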
00:19:14.963 05:21:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.963 05:21:31 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:14.963 05:21:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.963 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.963 Malloc0 00:19:14.963 05:21:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.963 05:21:31 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.963 05:21:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.963 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.963 05:21:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.963 05:21:31 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:14.963 05:21:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.963 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.963 05:21:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.964 05:21:31 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:14.964 05:21:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.964 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.964 [2024-11-19 05:21:31.364118] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:14.964 05:21:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.964 05:21:31 -- target/queue_depth.sh@30 -- # bdevperf_pid=1827529 00:19:14.964 05:21:31 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.964 05:21:31 -- target/queue_depth.sh@33 -- # waitforlisten 1827529 /var/tmp/bdevperf.sock 00:19:14.964 05:21:31 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:14.964 05:21:31 -- common/autotest_common.sh@829 -- # '[' -z 1827529 ']' 00:19:14.964 05:21:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.964 05:21:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.964 05:21:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.964 05:21:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.964 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.964 [2024-11-19 05:21:31.397352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:14.964 [2024-11-19 05:21:31.397398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827529 ] 00:19:14.964 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.964 [2024-11-19 05:21:31.466564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.964 [2024-11-19 05:21:31.502960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.904 05:21:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.904 05:21:32 -- common/autotest_common.sh@862 -- # return 0 00:19:15.904 05:21:32 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:15.904 05:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.904 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:19:15.904 NVMe0n1 00:19:15.904 05:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.904 05:21:32 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.904 Running I/O for 10 seconds... 00:19:25.978 00:19:25.978 Latency(us) 00:19:25.978 [2024-11-19T04:21:42.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.978 [2024-11-19T04:21:42.536Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:25.978 Verification LBA range: start 0x0 length 0x4000 00:19:25.978 NVMe0n1 : 10.03 29568.49 115.50 0.00 0.00 34553.79 7549.75 28940.70 00:19:25.978 [2024-11-19T04:21:42.536Z] =================================================================================================================== 00:19:25.978 [2024-11-19T04:21:42.536Z] Total : 29568.49 115.50 0.00 0.00 34553.79 7549.75 28940.70 00:19:25.978 0 00:19:25.978 05:21:42 -- target/queue_depth.sh@39 -- # killprocess 1827529 00:19:25.978 05:21:42 -- common/autotest_common.sh@936 -- # '[' -z 1827529 ']' 00:19:25.978 05:21:42 -- common/autotest_common.sh@940 -- # kill -0 1827529 00:19:25.978 05:21:42 -- common/autotest_common.sh@941 -- # uname 00:19:25.978 05:21:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.978 05:21:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1827529 00:19:25.978 05:21:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:25.978 05:21:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:25.978 05:21:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1827529' 00:19:25.978 killing process with pid 1827529 00:19:25.978 05:21:42 -- common/autotest_common.sh@955 -- # kill 1827529 00:19:25.978 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.978 00:19:25.978 Latency(us) 00:19:25.978 [2024-11-19T04:21:42.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.978 [2024-11-19T04:21:42.536Z] =================================================================================================================== 00:19:25.978 [2024-11-19T04:21:42.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.978 05:21:42 -- common/autotest_common.sh@960 -- # wait 1827529 00:19:26.238 05:21:42 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:26.238 05:21:42 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:19:26.238 05:21:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.238 05:21:42 -- nvmf/common.sh@116 -- # sync 00:19:26.238 05:21:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:26.238 05:21:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:26.238 05:21:42 -- nvmf/common.sh@119 -- # set +e 00:19:26.238 05:21:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.238 05:21:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:26.238 rmmod nvme_rdma 00:19:26.238 rmmod nvme_fabrics 00:19:26.238 05:21:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.238 05:21:42 -- nvmf/common.sh@123 -- # set -e 00:19:26.238 05:21:42 -- nvmf/common.sh@124 -- # return 0 00:19:26.238 05:21:42 -- nvmf/common.sh@477 -- # '[' -n 1827247 ']' 00:19:26.238 05:21:42 -- nvmf/common.sh@478 -- # killprocess 1827247 00:19:26.238 05:21:42 -- common/autotest_common.sh@936 -- # '[' -z 1827247 ']' 00:19:26.238 05:21:42 -- common/autotest_common.sh@940 -- # kill -0 1827247 00:19:26.238 05:21:42 -- common/autotest_common.sh@941 -- # uname 00:19:26.238 05:21:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:26.238 05:21:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1827247 00:19:26.499 05:21:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:26.499 05:21:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:26.499 05:21:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1827247' 00:19:26.499 killing process with pid 1827247 00:19:26.499 05:21:42 -- common/autotest_common.sh@955 -- # kill 1827247 00:19:26.499 05:21:42 -- common/autotest_common.sh@960 -- # wait 1827247 00:19:26.499 05:21:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.499 05:21:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.499 00:19:26.499 real 0m19.840s 00:19:26.499 user 0m26.223s 00:19:26.499 sys 0m6.030s 00:19:26.499 05:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:26.499 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:26.499 ************************************ 00:19:26.499 END TEST nvmf_queue_depth 00:19:26.499 ************************************ 00:19:26.759 05:21:43 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:26.759 05:21:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:26.759 05:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:26.759 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:26.759 ************************************ 00:19:26.759 START TEST nvmf_multipath 00:19:26.759 ************************************ 00:19:26.759 05:21:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:26.759 * Looking for test storage... 
00:19:26.759 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.759 05:21:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:26.759 05:21:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:26.759 05:21:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:26.759 05:21:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:26.759 05:21:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:26.759 05:21:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:26.759 05:21:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:26.759 05:21:43 -- scripts/common.sh@335 -- # IFS=.-: 00:19:26.759 05:21:43 -- scripts/common.sh@335 -- # read -ra ver1 00:19:26.759 05:21:43 -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.759 05:21:43 -- scripts/common.sh@336 -- # read -ra ver2 00:19:26.759 05:21:43 -- scripts/common.sh@337 -- # local 'op=<' 00:19:26.759 05:21:43 -- scripts/common.sh@339 -- # ver1_l=2 00:19:26.759 05:21:43 -- scripts/common.sh@340 -- # ver2_l=1 00:19:26.759 05:21:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:26.759 05:21:43 -- scripts/common.sh@343 -- # case "$op" in 00:19:26.759 05:21:43 -- scripts/common.sh@344 -- # : 1 00:19:26.759 05:21:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:26.759 05:21:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:26.759 05:21:43 -- scripts/common.sh@364 -- # decimal 1 00:19:26.759 05:21:43 -- scripts/common.sh@352 -- # local d=1 00:19:26.759 05:21:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.759 05:21:43 -- scripts/common.sh@354 -- # echo 1 00:19:26.759 05:21:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:26.759 05:21:43 -- scripts/common.sh@365 -- # decimal 2 00:19:26.759 05:21:43 -- scripts/common.sh@352 -- # local d=2 00:19:26.759 05:21:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.759 05:21:43 -- scripts/common.sh@354 -- # echo 2 00:19:26.759 05:21:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:26.759 05:21:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:26.759 05:21:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:26.759 05:21:43 -- scripts/common.sh@367 -- # return 0 00:19:26.759 05:21:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.759 05:21:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.759 --rc genhtml_branch_coverage=1 00:19:26.759 --rc genhtml_function_coverage=1 00:19:26.759 --rc genhtml_legend=1 00:19:26.759 --rc geninfo_all_blocks=1 00:19:26.759 --rc geninfo_unexecuted_blocks=1 00:19:26.759 00:19:26.759 ' 00:19:26.759 05:21:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.759 --rc genhtml_branch_coverage=1 00:19:26.759 --rc genhtml_function_coverage=1 00:19:26.759 --rc genhtml_legend=1 00:19:26.759 --rc geninfo_all_blocks=1 00:19:26.759 --rc geninfo_unexecuted_blocks=1 00:19:26.759 00:19:26.759 ' 00:19:26.759 05:21:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.759 --rc genhtml_branch_coverage=1 00:19:26.759 --rc genhtml_function_coverage=1 00:19:26.759 --rc genhtml_legend=1 00:19:26.759 --rc geninfo_all_blocks=1 00:19:26.759 --rc geninfo_unexecuted_blocks=1 00:19:26.759 00:19:26.759 ' 
00:19:26.759 05:21:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.759 --rc genhtml_branch_coverage=1 00:19:26.759 --rc genhtml_function_coverage=1 00:19:26.759 --rc genhtml_legend=1 00:19:26.759 --rc geninfo_all_blocks=1 00:19:26.759 --rc geninfo_unexecuted_blocks=1 00:19:26.759 00:19:26.759 ' 00:19:26.759 05:21:43 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.759 05:21:43 -- nvmf/common.sh@7 -- # uname -s 00:19:26.759 05:21:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.759 05:21:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.759 05:21:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.759 05:21:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.759 05:21:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.759 05:21:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.759 05:21:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.759 05:21:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.759 05:21:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.759 05:21:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.759 05:21:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.759 05:21:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.759 05:21:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.759 05:21:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.759 05:21:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.759 05:21:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.759 05:21:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.759 05:21:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.759 05:21:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.759 05:21:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.759 05:21:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.759 05:21:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.759 05:21:43 -- paths/export.sh@5 -- # export PATH 00:19:26.759 05:21:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.759 05:21:43 -- nvmf/common.sh@46 -- # : 0 00:19:26.759 05:21:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:26.759 05:21:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:26.759 05:21:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:26.760 05:21:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.760 05:21:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.760 05:21:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:26.760 05:21:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:26.760 05:21:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:26.760 05:21:43 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.760 05:21:43 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.760 05:21:43 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:26.760 05:21:43 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:27.020 05:21:43 -- target/multipath.sh@43 -- # nvmftestinit 00:19:27.020 05:21:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:27.020 05:21:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.020 05:21:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:27.020 05:21:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:27.020 05:21:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:27.020 05:21:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.020 05:21:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.020 05:21:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.020 05:21:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:27.020 05:21:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:27.020 05:21:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:27.020 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:33.602 05:21:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:33.602 05:21:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:33.602 05:21:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:33.602 05:21:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:33.602 05:21:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:33.602 05:21:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:33.602 05:21:49 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:33.602 05:21:49 -- nvmf/common.sh@294 -- # net_devs=() 00:19:33.602 05:21:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:33.602 05:21:49 -- nvmf/common.sh@295 -- # e810=() 00:19:33.602 05:21:49 -- nvmf/common.sh@295 -- # local -ga e810 00:19:33.602 05:21:49 -- nvmf/common.sh@296 -- # x722=() 00:19:33.602 05:21:49 -- nvmf/common.sh@296 -- # local -ga x722 00:19:33.602 05:21:49 -- nvmf/common.sh@297 -- # mlx=() 00:19:33.602 05:21:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:33.602 05:21:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.602 05:21:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:33.602 05:21:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:33.602 05:21:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:33.602 05:21:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:33.602 05:21:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:33.602 05:21:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:33.602 05:21:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:33.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:33.602 05:21:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.602 05:21:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:33.602 05:21:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:33.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:33.602 05:21:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:33.602 05:21:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.602 05:21:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:33.602 05:21:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:33.602 05:21:49 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:33.602 05:21:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.602 05:21:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:33.602 05:21:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.602 05:21:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:33.602 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:33.602 05:21:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.602 05:21:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.603 05:21:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:33.603 05:21:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.603 05:21:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:33.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.603 05:21:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:33.603 05:21:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:33.603 05:21:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:33.603 05:21:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:33.603 05:21:49 -- nvmf/common.sh@57 -- # uname 00:19:33.603 05:21:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:33.603 05:21:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:33.603 05:21:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:33.603 05:21:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:33.603 05:21:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:33.603 05:21:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:33.603 05:21:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:33.603 05:21:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:33.603 05:21:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:33.603 05:21:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:33.603 05:21:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.603 05:21:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:33.603 05:21:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:33.603 05:21:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.603 05:21:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:33.603 05:21:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@104 -- # continue 2 00:19:33.603 05:21:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@104 -- # continue 2 00:19:33.603 05:21:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:33.603 05:21:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.603 05:21:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:33.603 05:21:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:33.603 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.603 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:33.603 altname enp217s0f0np0 00:19:33.603 altname ens818f0np0 00:19:33.603 inet 192.168.100.8/24 scope global mlx_0_0 00:19:33.603 valid_lft forever preferred_lft forever 00:19:33.603 05:21:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:33.603 05:21:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.603 05:21:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:33.603 05:21:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:33.603 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.603 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:33.603 altname enp217s0f1np1 00:19:33.603 altname ens818f1np1 00:19:33.603 inet 192.168.100.9/24 scope global mlx_0_1 00:19:33.603 valid_lft forever preferred_lft forever 00:19:33.603 05:21:49 -- nvmf/common.sh@410 -- # return 0 00:19:33.603 05:21:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:33.603 05:21:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:33.603 05:21:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:33.603 05:21:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.603 05:21:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:33.603 05:21:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:33.603 05:21:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.603 05:21:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:33.603 05:21:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@104 -- # continue 2 00:19:33.603 05:21:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:33.603 05:21:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.603 05:21:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@104 -- # continue 2 00:19:33.603 05:21:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.603 05:21:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.603 05:21:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.603 05:21:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.603 05:21:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.603 05:21:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:33.603 192.168.100.9' 00:19:33.603 05:21:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:33.603 192.168.100.9' 00:19:33.603 05:21:49 -- nvmf/common.sh@445 -- # head -n 1 00:19:33.603 05:21:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:33.603 05:21:49 -- nvmf/common.sh@446 -- # tail -n +2 00:19:33.603 05:21:49 -- nvmf/common.sh@446 -- # head -n 1 00:19:33.603 05:21:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:33.603 192.168.100.9' 00:19:33.603 05:21:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:33.603 05:21:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:33.603 05:21:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:33.603 05:21:49 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:33.603 05:21:49 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:33.603 05:21:49 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:33.603 run this test only with TCP transport for now 00:19:33.603 05:21:49 -- target/multipath.sh@53 -- # nvmftestfini 00:19:33.603 05:21:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:33.603 05:21:49 -- nvmf/common.sh@116 -- # sync 00:19:33.603 05:21:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:33.603 05:21:49 -- nvmf/common.sh@119 -- # set +e 00:19:33.603 05:21:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:33.604 05:21:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:33.604 rmmod nvme_rdma 00:19:33.604 rmmod nvme_fabrics 00:19:33.604 05:21:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:33.604 05:21:50 -- nvmf/common.sh@123 -- # set -e 00:19:33.604 05:21:50 -- nvmf/common.sh@124 -- # return 0 00:19:33.604 05:21:50 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:33.604 05:21:50 -- target/multipath.sh@54 -- # exit 0 00:19:33.604 05:21:50 -- target/multipath.sh@1 -- # nvmftestfini 00:19:33.604 05:21:50 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:33.604 05:21:50 -- nvmf/common.sh@116 -- # sync 00:19:33.604 05:21:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@119 -- # set +e 00:19:33.604 05:21:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:33.604 05:21:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:33.604 05:21:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:33.604 05:21:50 -- nvmf/common.sh@123 -- # set -e 00:19:33.604 05:21:50 -- nvmf/common.sh@124 -- # return 0 00:19:33.604 05:21:50 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.604 05:21:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:33.604 00:19:33.604 real 0m6.934s 00:19:33.604 user 0m2.058s 00:19:33.604 sys 0m5.092s 00:19:33.604 05:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:33.604 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 ************************************ 00:19:33.604 END TEST nvmf_multipath 00:19:33.604 ************************************ 00:19:33.604 05:21:50 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:33.604 05:21:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:33.604 05:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.604 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 ************************************ 00:19:33.604 START TEST nvmf_zcopy 00:19:33.604 ************************************ 00:19:33.604 05:21:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:33.864 * Looking for test storage... 00:19:33.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:33.864 05:21:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:33.864 05:21:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:33.865 05:21:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:33.865 05:21:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:33.865 05:21:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:33.865 05:21:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:33.865 05:21:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:33.865 05:21:50 -- scripts/common.sh@335 -- # IFS=.-: 00:19:33.865 05:21:50 -- scripts/common.sh@335 -- # read -ra ver1 00:19:33.865 05:21:50 -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.865 05:21:50 -- scripts/common.sh@336 -- # read -ra ver2 00:19:33.865 05:21:50 -- scripts/common.sh@337 -- # local 'op=<' 00:19:33.865 05:21:50 -- scripts/common.sh@339 -- # ver1_l=2 00:19:33.865 05:21:50 -- scripts/common.sh@340 -- # ver2_l=1 00:19:33.865 05:21:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:33.865 05:21:50 -- scripts/common.sh@343 -- # case "$op" in 00:19:33.865 05:21:50 -- scripts/common.sh@344 -- # : 1 00:19:33.865 05:21:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:33.865 05:21:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.865 05:21:50 -- scripts/common.sh@364 -- # decimal 1 00:19:33.865 05:21:50 -- scripts/common.sh@352 -- # local d=1 00:19:33.865 05:21:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.865 05:21:50 -- scripts/common.sh@354 -- # echo 1 00:19:33.865 05:21:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:33.865 05:21:50 -- scripts/common.sh@365 -- # decimal 2 00:19:33.865 05:21:50 -- scripts/common.sh@352 -- # local d=2 00:19:33.865 05:21:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.865 05:21:50 -- scripts/common.sh@354 -- # echo 2 00:19:33.865 05:21:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:33.865 05:21:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:33.865 05:21:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:33.865 05:21:50 -- scripts/common.sh@367 -- # return 0 00:19:33.865 05:21:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.865 05:21:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.865 --rc genhtml_branch_coverage=1 00:19:33.865 --rc genhtml_function_coverage=1 00:19:33.865 --rc genhtml_legend=1 00:19:33.865 --rc geninfo_all_blocks=1 00:19:33.865 --rc geninfo_unexecuted_blocks=1 00:19:33.865 00:19:33.865 ' 00:19:33.865 05:21:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.865 --rc genhtml_branch_coverage=1 00:19:33.865 --rc genhtml_function_coverage=1 00:19:33.865 --rc genhtml_legend=1 00:19:33.865 --rc geninfo_all_blocks=1 00:19:33.865 --rc geninfo_unexecuted_blocks=1 00:19:33.865 00:19:33.865 ' 00:19:33.865 05:21:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.865 --rc genhtml_branch_coverage=1 00:19:33.865 --rc genhtml_function_coverage=1 00:19:33.865 --rc genhtml_legend=1 00:19:33.865 --rc geninfo_all_blocks=1 00:19:33.865 --rc geninfo_unexecuted_blocks=1 00:19:33.865 00:19:33.865 ' 00:19:33.865 05:21:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.865 --rc genhtml_branch_coverage=1 00:19:33.865 --rc genhtml_function_coverage=1 00:19:33.865 --rc genhtml_legend=1 00:19:33.865 --rc geninfo_all_blocks=1 00:19:33.865 --rc geninfo_unexecuted_blocks=1 00:19:33.865 00:19:33.865 ' 00:19:33.865 05:21:50 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.865 05:21:50 -- nvmf/common.sh@7 -- # uname -s 00:19:33.865 05:21:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.865 05:21:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.865 05:21:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.865 05:21:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.865 05:21:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.865 05:21:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.865 05:21:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.865 05:21:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.865 05:21:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.865 05:21:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.865 05:21:50 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.865 05:21:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:33.865 05:21:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.865 05:21:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.865 05:21:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.865 05:21:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:33.865 05:21:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.865 05:21:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.865 05:21:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.865 05:21:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.865 05:21:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.865 05:21:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.865 05:21:50 -- paths/export.sh@5 -- # export PATH 00:19:33.865 05:21:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.865 05:21:50 -- nvmf/common.sh@46 -- # : 0 00:19:33.865 05:21:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.865 05:21:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.865 05:21:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.865 05:21:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.865 05:21:50 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.865 05:21:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.865 05:21:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.865 05:21:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.865 05:21:50 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:33.865 05:21:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:33.865 05:21:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.865 05:21:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.865 05:21:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.865 05:21:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.865 05:21:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.865 05:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.865 05:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.865 05:21:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:33.865 05:21:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:33.865 05:21:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:33.865 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:40.444 05:21:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:40.444 05:21:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:40.444 05:21:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:40.444 05:21:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:40.444 05:21:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:40.444 05:21:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:40.444 05:21:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:40.444 05:21:56 -- nvmf/common.sh@294 -- # net_devs=() 00:19:40.444 05:21:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:40.444 05:21:56 -- nvmf/common.sh@295 -- # e810=() 00:19:40.444 05:21:56 -- nvmf/common.sh@295 -- # local -ga e810 00:19:40.444 05:21:56 -- nvmf/common.sh@296 -- # x722=() 00:19:40.444 05:21:56 -- nvmf/common.sh@296 -- # local -ga x722 00:19:40.444 05:21:56 -- nvmf/common.sh@297 -- # mlx=() 00:19:40.444 05:21:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:40.444 05:21:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.444 05:21:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.444 05:21:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.444 05:21:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.444 05:21:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.444 05:21:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.445 05:21:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:40.445 
05:21:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:40.445 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:40.445 05:21:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.445 05:21:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:40.445 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:40.445 05:21:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.445 05:21:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.445 05:21:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.445 05:21:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:40.445 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.445 05:21:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.445 05:21:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:40.445 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.445 05:21:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:40.445 05:21:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:40.445 05:21:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:40.445 05:21:56 -- nvmf/common.sh@57 -- # uname 00:19:40.445 05:21:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:40.445 05:21:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:40.445 05:21:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:40.445 05:21:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:40.445 05:21:56 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:40.445 05:21:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:40.445 05:21:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:40.445 05:21:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:40.445 05:21:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:40.445 05:21:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:40.445 05:21:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:40.445 05:21:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.445 05:21:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:40.445 05:21:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:40.445 05:21:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.445 05:21:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@104 -- # continue 2 00:19:40.445 05:21:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@104 -- # continue 2 00:19:40.445 05:21:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:40.445 05:21:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.445 05:21:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:40.445 05:21:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:40.445 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.445 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:40.445 altname enp217s0f0np0 00:19:40.445 altname ens818f0np0 00:19:40.445 inet 192.168.100.8/24 scope global mlx_0_0 00:19:40.445 valid_lft forever preferred_lft forever 00:19:40.445 05:21:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:40.445 05:21:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.445 05:21:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:40.445 05:21:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:40.445 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.445 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:40.445 altname enp217s0f1np1 00:19:40.445 altname 
ens818f1np1 00:19:40.445 inet 192.168.100.9/24 scope global mlx_0_1 00:19:40.445 valid_lft forever preferred_lft forever 00:19:40.445 05:21:56 -- nvmf/common.sh@410 -- # return 0 00:19:40.445 05:21:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:40.445 05:21:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:40.445 05:21:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:40.445 05:21:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:40.445 05:21:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.445 05:21:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:40.445 05:21:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:40.445 05:21:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.445 05:21:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:40.445 05:21:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@104 -- # continue 2 00:19:40.445 05:21:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.445 05:21:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.445 05:21:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@104 -- # continue 2 00:19:40.445 05:21:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:40.445 05:21:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.445 05:21:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:40.445 05:21:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.445 05:21:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.445 05:21:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:40.445 192.168.100.9' 00:19:40.445 05:21:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:40.445 192.168.100.9' 00:19:40.445 05:21:56 -- nvmf/common.sh@445 -- # head -n 1 00:19:40.445 05:21:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:40.445 05:21:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:40.445 192.168.100.9' 00:19:40.445 05:21:56 -- nvmf/common.sh@446 -- # tail -n +2 00:19:40.446 05:21:56 -- nvmf/common.sh@446 -- # head -n 1 00:19:40.446 05:21:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:40.446 05:21:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:40.446 05:21:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:40.446 
05:21:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:40.446 05:21:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:40.446 05:21:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:40.446 05:21:56 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:40.446 05:21:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:40.446 05:21:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.446 05:21:56 -- common/autotest_common.sh@10 -- # set +x 00:19:40.446 05:21:56 -- nvmf/common.sh@469 -- # nvmfpid=1836059 00:19:40.446 05:21:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.446 05:21:56 -- nvmf/common.sh@470 -- # waitforlisten 1836059 00:19:40.446 05:21:56 -- common/autotest_common.sh@829 -- # '[' -z 1836059 ']' 00:19:40.446 05:21:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.446 05:21:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.446 05:21:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.446 05:21:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.446 05:21:56 -- common/autotest_common.sh@10 -- # set +x 00:19:40.706 [2024-11-19 05:21:57.033525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:40.706 [2024-11-19 05:21:57.033581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.706 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.706 [2024-11-19 05:21:57.103743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.706 [2024-11-19 05:21:57.140103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:40.706 [2024-11-19 05:21:57.140210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.706 [2024-11-19 05:21:57.140220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.706 [2024-11-19 05:21:57.140228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
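
Before the zcopy target was started above, the trace walked get_rdma_if_list over both mlx5 ports and split the discovered addresses into first and second target IPs. Condensed into a standalone sketch, the pipeline it traces looks like this (interface names assume the mlx_0_* devices seen in this run; the helper body is reconstructed from the trace at common.sh@112/@445/@446, not lifted from nvmf/common.sh):

  get_ip_address() {
      # first IPv4 address on the interface, without the /24 prefix length
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
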
00:19:40.706 [2024-11-19 05:21:57.140253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.646 05:21:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.646 05:21:57 -- common/autotest_common.sh@862 -- # return 0 00:19:41.646 05:21:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:41.646 05:21:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.646 05:21:57 -- common/autotest_common.sh@10 -- # set +x 00:19:41.646 05:21:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.646 05:21:57 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:41.646 05:21:57 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:41.646 Unsupported transport: rdma 00:19:41.646 05:21:57 -- target/zcopy.sh@17 -- # exit 0 00:19:41.646 05:21:57 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:41.646 05:21:57 -- common/autotest_common.sh@806 -- # type=--id 00:19:41.646 05:21:57 -- common/autotest_common.sh@807 -- # id=0 00:19:41.646 05:21:57 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:41.646 05:21:57 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:41.646 05:21:57 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:41.646 05:21:57 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:41.646 05:21:57 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:41.646 05:21:57 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:41.646 nvmf_trace.0 00:19:41.646 05:21:57 -- common/autotest_common.sh@821 -- # return 0 00:19:41.646 05:21:57 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:41.646 05:21:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.646 05:21:57 -- nvmf/common.sh@116 -- # sync 00:19:41.646 05:21:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:41.646 05:21:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:41.646 05:21:57 -- nvmf/common.sh@119 -- # set +e 00:19:41.646 05:21:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.646 05:21:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:41.646 rmmod nvme_rdma 00:19:41.646 rmmod nvme_fabrics 00:19:41.646 05:21:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.646 05:21:57 -- nvmf/common.sh@123 -- # set -e 00:19:41.646 05:21:57 -- nvmf/common.sh@124 -- # return 0 00:19:41.646 05:21:57 -- nvmf/common.sh@477 -- # '[' -n 1836059 ']' 00:19:41.646 05:21:57 -- nvmf/common.sh@478 -- # killprocess 1836059 00:19:41.646 05:21:57 -- common/autotest_common.sh@936 -- # '[' -z 1836059 ']' 00:19:41.646 05:21:57 -- common/autotest_common.sh@940 -- # kill -0 1836059 00:19:41.646 05:21:57 -- common/autotest_common.sh@941 -- # uname 00:19:41.646 05:21:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.646 05:21:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1836059 00:19:41.646 05:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:41.646 05:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:41.646 05:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1836059' 00:19:41.646 killing process with pid 1836059 00:19:41.646 05:21:58 -- common/autotest_common.sh@955 -- # kill 1836059 00:19:41.646 05:21:58 -- common/autotest_common.sh@960 -- # wait 1836059 00:19:41.907 05:21:58 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:41.907 05:21:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:41.907 00:19:41.907 real 0m8.138s 00:19:41.907 user 0m3.506s 00:19:41.907 sys 0m5.393s 00:19:41.907 05:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:41.907 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.907 ************************************ 00:19:41.907 END TEST nvmf_zcopy 00:19:41.907 ************************************ 00:19:41.907 05:21:58 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:41.907 05:21:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:41.907 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.907 ************************************ 00:19:41.907 START TEST nvmf_nmic 00:19:41.907 ************************************ 00:19:41.907 05:21:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:41.907 * Looking for test storage... 00:19:41.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:41.907 05:21:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:41.907 05:21:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:41.907 05:21:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:41.907 05:21:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:41.907 05:21:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:41.907 05:21:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:41.907 05:21:58 -- scripts/common.sh@335 -- # IFS=.-: 00:19:41.907 05:21:58 -- scripts/common.sh@335 -- # read -ra ver1 00:19:41.907 05:21:58 -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.907 05:21:58 -- scripts/common.sh@336 -- # read -ra ver2 00:19:41.907 05:21:58 -- scripts/common.sh@337 -- # local 'op=<' 00:19:41.907 05:21:58 -- scripts/common.sh@339 -- # ver1_l=2 00:19:41.907 05:21:58 -- scripts/common.sh@340 -- # ver2_l=1 00:19:41.907 05:21:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:41.907 05:21:58 -- scripts/common.sh@343 -- # case "$op" in 00:19:41.907 05:21:58 -- scripts/common.sh@344 -- # : 1 00:19:41.907 05:21:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:41.907 05:21:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.907 05:21:58 -- scripts/common.sh@364 -- # decimal 1 00:19:41.907 05:21:58 -- scripts/common.sh@352 -- # local d=1 00:19:41.907 05:21:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.907 05:21:58 -- scripts/common.sh@354 -- # echo 1 00:19:41.907 05:21:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:41.907 05:21:58 -- scripts/common.sh@365 -- # decimal 2 00:19:41.907 05:21:58 -- scripts/common.sh@352 -- # local d=2 00:19:41.907 05:21:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.907 05:21:58 -- scripts/common.sh@354 -- # echo 2 00:19:41.907 05:21:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:41.907 05:21:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:41.907 05:21:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:41.907 05:21:58 -- scripts/common.sh@367 -- # return 0 00:19:41.907 05:21:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:41.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.907 --rc genhtml_branch_coverage=1 00:19:41.907 --rc genhtml_function_coverage=1 00:19:41.907 --rc genhtml_legend=1 00:19:41.907 --rc geninfo_all_blocks=1 00:19:41.907 --rc geninfo_unexecuted_blocks=1 00:19:41.907 00:19:41.907 ' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:41.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.907 --rc genhtml_branch_coverage=1 00:19:41.907 --rc genhtml_function_coverage=1 00:19:41.907 --rc genhtml_legend=1 00:19:41.907 --rc geninfo_all_blocks=1 00:19:41.907 --rc geninfo_unexecuted_blocks=1 00:19:41.907 00:19:41.907 ' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:41.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.907 --rc genhtml_branch_coverage=1 00:19:41.907 --rc genhtml_function_coverage=1 00:19:41.907 --rc genhtml_legend=1 00:19:41.907 --rc geninfo_all_blocks=1 00:19:41.907 --rc geninfo_unexecuted_blocks=1 00:19:41.907 00:19:41.907 ' 00:19:41.907 05:21:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:41.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.907 --rc genhtml_branch_coverage=1 00:19:41.907 --rc genhtml_function_coverage=1 00:19:41.907 --rc genhtml_legend=1 00:19:41.907 --rc geninfo_all_blocks=1 00:19:41.907 --rc geninfo_unexecuted_blocks=1 00:19:41.907 00:19:41.907 ' 00:19:41.907 05:21:58 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.907 05:21:58 -- nvmf/common.sh@7 -- # uname -s 00:19:41.907 05:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.907 05:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.907 05:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.907 05:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.907 05:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.907 05:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.907 05:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.907 05:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.907 05:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.907 05:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.167 05:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
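
Each test script re-sources nvmf/common.sh, which regenerates the host identity with nvme-cli, as traced at common.sh@17 above and @18 in the next entry. A hedged sketch of that derivation (the exact parameter expansion used by common.sh may differ; what the trace shows is that the host ID is the UUID suffix of the generated NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # drop everything through the last ':', keeping the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
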
00:19:42.167 05:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:42.167 05:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.167 05:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.167 05:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.167 05:21:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:42.167 05:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.167 05:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.167 05:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.167 05:21:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.167 05:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.168 05:21:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.168 05:21:58 -- paths/export.sh@5 -- # export PATH 00:19:42.168 05:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.168 05:21:58 -- nvmf/common.sh@46 -- # : 0 00:19:42.168 05:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.168 05:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.168 05:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.168 05:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.168 05:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.168 05:21:58 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.168 05:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.168 05:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.168 05:21:58 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.168 05:21:58 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.168 05:21:58 -- target/nmic.sh@14 -- # nvmftestinit 00:19:42.168 05:21:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:42.168 05:21:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.168 05:21:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.168 05:21:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.168 05:21:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.168 05:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.168 05:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.168 05:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.168 05:21:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:42.168 05:21:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.168 05:21:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.168 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:48.755 05:22:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:48.755 05:22:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:48.755 05:22:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:48.755 05:22:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:48.755 05:22:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:48.755 05:22:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:48.755 05:22:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:48.755 05:22:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:48.755 05:22:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:48.755 05:22:05 -- nvmf/common.sh@295 -- # e810=() 00:19:48.755 05:22:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:48.755 05:22:05 -- nvmf/common.sh@296 -- # x722=() 00:19:48.755 05:22:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:48.755 05:22:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:48.755 05:22:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:48.755 05:22:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.755 05:22:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:48.755 05:22:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:48.755 05:22:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:48.755 05:22:05 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:48.755 05:22:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:48.755 05:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.755 05:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:48.755 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:48.755 05:22:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.755 05:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.755 05:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:48.755 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:48.755 05:22:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.755 05:22:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:48.755 05:22:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.755 05:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.755 05:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.755 05:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.755 05:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:48.755 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:48.755 05:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.755 05:22:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.755 05:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.755 05:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.755 05:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.755 05:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:48.755 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:48.755 05:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.755 05:22:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:48.755 05:22:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:48.755 05:22:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:48.755 05:22:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:48.755 05:22:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:48.755 05:22:05 -- nvmf/common.sh@57 -- # uname 00:19:48.755 05:22:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:48.755 05:22:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:48.755 05:22:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:48.755 05:22:05 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:48.755 05:22:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:48.755 05:22:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:48.755 05:22:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:48.755 05:22:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:48.755 05:22:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:48.755 05:22:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:48.755 05:22:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:48.755 05:22:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.755 05:22:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:48.755 05:22:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:48.756 05:22:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.756 05:22:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:48.756 05:22:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.756 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.756 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:48.756 05:22:05 -- nvmf/common.sh@104 -- # continue 2 00:19:48.756 05:22:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.756 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.756 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.756 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:48.756 05:22:05 -- nvmf/common.sh@104 -- # continue 2 00:19:48.756 05:22:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:48.756 05:22:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:48.756 05:22:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.756 05:22:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:48.756 05:22:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:48.756 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.756 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:48.756 altname enp217s0f0np0 00:19:48.756 altname ens818f0np0 00:19:48.756 inet 192.168.100.8/24 scope global mlx_0_0 00:19:48.756 valid_lft forever preferred_lft forever 00:19:48.756 05:22:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:48.756 05:22:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:48.756 05:22:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:48.756 05:22:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.756 05:22:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:48.756 05:22:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:48.756 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.756 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:19:48.756 altname enp217s0f1np1 00:19:48.756 altname ens818f1np1 00:19:48.756 inet 192.168.100.9/24 scope global mlx_0_1 00:19:48.756 valid_lft forever preferred_lft forever 00:19:48.756 05:22:05 -- nvmf/common.sh@410 -- # return 0 00:19:48.756 05:22:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:48.756 05:22:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:48.756 05:22:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:48.756 05:22:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:48.756 05:22:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:48.756 05:22:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.756 05:22:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:48.756 05:22:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:48.756 05:22:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:49.016 05:22:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:49.016 05:22:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.016 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.016 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:49.016 05:22:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:49.016 05:22:05 -- nvmf/common.sh@104 -- # continue 2 00:19:49.016 05:22:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.016 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.016 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:49.016 05:22:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.016 05:22:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:49.016 05:22:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:49.016 05:22:05 -- nvmf/common.sh@104 -- # continue 2 00:19:49.016 05:22:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.016 05:22:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:49.016 05:22:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.016 05:22:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.016 05:22:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:49.016 05:22:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.016 05:22:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.016 05:22:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:49.016 192.168.100.9' 00:19:49.016 05:22:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:49.016 192.168.100.9' 00:19:49.016 05:22:05 -- nvmf/common.sh@445 -- # head -n 1 00:19:49.016 05:22:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:49.016 05:22:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:49.016 192.168.100.9' 00:19:49.016 05:22:05 -- nvmf/common.sh@446 -- # tail -n +2 00:19:49.016 05:22:05 -- nvmf/common.sh@446 -- # head -n 1 00:19:49.016 05:22:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:49.016 05:22:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:49.016 05:22:05 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:49.016 05:22:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:49.016 05:22:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:49.016 05:22:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:49.016 05:22:05 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:49.016 05:22:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.017 05:22:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.017 05:22:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.017 05:22:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.017 05:22:05 -- nvmf/common.sh@469 -- # nvmfpid=1839593 00:19:49.017 05:22:05 -- nvmf/common.sh@470 -- # waitforlisten 1839593 00:19:49.017 05:22:05 -- common/autotest_common.sh@829 -- # '[' -z 1839593 ']' 00:19:49.017 05:22:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.017 05:22:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.017 05:22:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.017 05:22:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.017 05:22:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.017 [2024-11-19 05:22:05.451931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:49.017 [2024-11-19 05:22:05.451983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.017 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.017 [2024-11-19 05:22:05.519902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.017 [2024-11-19 05:22:05.559852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.017 [2024-11-19 05:22:05.559958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.017 [2024-11-19 05:22:05.559968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.017 [2024-11-19 05:22:05.559977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
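The block above resolves the two RDMA-capable ports (mlx_0_0, mlx_0_1) to IPv4 addresses before assigning NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that lookup, assuming iproute2 is present and the interfaces are named and addressed as in this run:

    #!/usr/bin/env bash
    # Sketch of the get_ip_address helper traced above: print the first
    # IPv4 address on an interface (column 4 of `ip -o -4 addr show`,
    # with the /24 prefix stripped by cut).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        get_ip_address "$nic"
    done
    # In this run that yields 192.168.100.8 and 192.168.100.9, the two
    # entries collected into RDMA_IP_LIST above.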
00:19:49.017 [2024-11-19 05:22:05.560022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.017 [2024-11-19 05:22:05.560139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.017 [2024-11-19 05:22:05.560212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.017 [2024-11-19 05:22:05.560214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.957 05:22:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.957 05:22:06 -- common/autotest_common.sh@862 -- # return 0 00:19:49.957 05:22:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.957 05:22:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.957 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.957 05:22:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.957 05:22:06 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:49.957 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.957 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.957 [2024-11-19 05:22:06.361901] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb72200/0xb766f0) succeed. 00:19:49.957 [2024-11-19 05:22:06.371097] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb737f0/0xbb7d90) succeed. 00:19:49.957 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.957 05:22:06 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:49.957 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.957 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.957 Malloc0 00:19:49.957 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.957 05:22:06 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:49.957 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.957 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.217 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 [2024-11-19 05:22:06.540274] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:50.218 test case1: single bdev can't be used in multiple subsystems 00:19:50.218 05:22:06 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 
05:22:06 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@28 -- # nmic_status=0 00:19:50.218 05:22:06 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 [2024-11-19 05:22:06.564086] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:50.218 [2024-11-19 05:22:06.564105] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:50.218 [2024-11-19 05:22:06.564115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:50.218 request: 00:19:50.218 { 00:19:50.218 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.218 "namespace": { 00:19:50.218 "bdev_name": "Malloc0" 00:19:50.218 }, 00:19:50.218 "method": "nvmf_subsystem_add_ns", 00:19:50.218 "req_id": 1 00:19:50.218 } 00:19:50.218 Got JSON-RPC error response 00:19:50.218 response: 00:19:50.218 { 00:19:50.218 "code": -32602, 00:19:50.218 "message": "Invalid parameters" 00:19:50.218 } 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@29 -- # nmic_status=1 00:19:50.218 05:22:06 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:50.218 05:22:06 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:50.218 Adding namespace failed - expected result. 
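test case1 above is a deliberate negative test: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to cnode2 must fail with the JSON-RPC -32602 error shown. A hedged sketch of the same check driven directly through rpc.py, reusing the subsystem and bdev names from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Expect failure: the bdev is held by nqn.2016-06.io.spdk:cnode1, and
    # a bdev can back at most one subsystem namespace at a time.
    if "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected success: bdev was added to a second subsystem' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'

This mirrors the nmic_status bookkeeping in the trace: the script records the failing return code and only proceeds when it is non-zero.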
00:19:50.218 05:22:06 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:50.218 test case2: host connect to nvmf target in multiple paths 00:19:50.218 05:22:06 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:50.218 05:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.218 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 [2024-11-19 05:22:06.576146] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:50.218 05:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.218 05:22:06 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:51.160 05:22:07 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:52.099 05:22:08 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:52.099 05:22:08 -- common/autotest_common.sh@1187 -- # local i=0 00:19:52.099 05:22:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:52.099 05:22:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:52.099 05:22:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:54.006 05:22:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:54.006 05:22:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:54.006 05:22:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:54.006 05:22:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:54.006 05:22:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:54.006 05:22:10 -- common/autotest_common.sh@1197 -- # return 0 00:19:54.006 05:22:10 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:54.289 [global] 00:19:54.289 thread=1 00:19:54.289 invalidate=1 00:19:54.289 rw=write 00:19:54.289 time_based=1 00:19:54.289 runtime=1 00:19:54.289 ioengine=libaio 00:19:54.289 direct=1 00:19:54.289 bs=4096 00:19:54.289 iodepth=1 00:19:54.289 norandommap=0 00:19:54.289 numjobs=1 00:19:54.289 00:19:54.289 verify_dump=1 00:19:54.289 verify_backlog=512 00:19:54.289 verify_state_save=0 00:19:54.289 do_verify=1 00:19:54.289 verify=crc32c-intel 00:19:54.289 [job0] 00:19:54.289 filename=/dev/nvme0n1 00:19:54.289 Could not set queue depth (nvme0n1) 00:19:54.552 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:54.552 fio-3.35 00:19:54.552 Starting 1 thread 00:19:55.935 00:19:55.935 job0: (groupid=0, jobs=1): err= 0: pid=1840772: Tue Nov 19 05:22:12 2024 00:19:55.935 read: IOPS=6798, BW=26.6MiB/s (27.8MB/s)(26.6MiB/1001msec) 00:19:55.935 slat (nsec): min=8312, max=31754, avg=8759.94, stdev=894.33 00:19:55.935 clat (usec): min=42, max=118, avg=59.83, stdev= 4.94 00:19:55.935 lat (usec): min=58, max=127, avg=68.59, stdev= 5.01 00:19:55.935 clat percentiles (usec): 00:19:55.935 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:19:55.935 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:19:55.935 | 
70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 70], 00:19:55.935 | 99.00th=[ 76], 99.50th=[ 79], 99.90th=[ 85], 99.95th=[ 89], 00:19:55.935 | 99.99th=[ 119] 00:19:55.935 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:55.935 slat (nsec): min=10711, max=49379, avg=11432.15, stdev=1124.96 00:19:55.935 clat (usec): min=40, max=105, avg=57.97, stdev= 5.41 00:19:55.935 lat (usec): min=58, max=154, avg=69.40, stdev= 5.54 00:19:55.935 clat percentiles (usec): 00:19:55.935 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 54], 00:19:55.935 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:19:55.935 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 65], 95.00th=[ 69], 00:19:55.935 | 99.00th=[ 77], 99.50th=[ 79], 99.90th=[ 85], 99.95th=[ 90], 00:19:55.935 | 99.99th=[ 106] 00:19:55.935 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:19:55.935 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:19:55.935 lat (usec) : 50=0.87%, 100=99.12%, 250=0.01% 00:19:55.935 cpu : usr=12.10%, sys=17.20%, ctx=13974, majf=0, minf=1 00:19:55.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.935 issued rwts: total=6805,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.935 00:19:55.935 Run status group 0 (all jobs): 00:19:55.935 READ: bw=26.6MiB/s (27.8MB/s), 26.6MiB/s-26.6MiB/s (27.8MB/s-27.8MB/s), io=26.6MiB (27.9MB), run=1001-1001msec 00:19:55.935 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:55.935 00:19:55.935 Disk stats (read/write): 00:19:55.935 nvme0n1: ios=6193/6360, merge=0/0, ticks=336/316, in_queue=652, util=90.67% 00:19:55.935 05:22:12 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:57.843 05:22:14 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.843 05:22:14 -- common/autotest_common.sh@1208 -- # local i=0 00:19:57.843 05:22:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:57.843 05:22:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.843 05:22:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:57.843 05:22:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.843 05:22:14 -- common/autotest_common.sh@1220 -- # return 0 00:19:57.843 05:22:14 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:57.843 05:22:14 -- target/nmic.sh@53 -- # nvmftestfini 00:19:57.843 05:22:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:57.843 05:22:14 -- nvmf/common.sh@116 -- # sync 00:19:57.843 05:22:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:57.843 05:22:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:57.843 05:22:14 -- nvmf/common.sh@119 -- # set +e 00:19:57.843 05:22:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:57.843 05:22:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:57.843 rmmod nvme_rdma 00:19:57.843 rmmod nvme_fabrics 00:19:57.843 05:22:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:57.843 05:22:14 -- nvmf/common.sh@123 -- # set -e 00:19:57.843 05:22:14 -- 
nvmf/common.sh@124 -- # return 0 00:19:57.843 05:22:14 -- nvmf/common.sh@477 -- # '[' -n 1839593 ']' 00:19:57.843 05:22:14 -- nvmf/common.sh@478 -- # killprocess 1839593 00:19:57.843 05:22:14 -- common/autotest_common.sh@936 -- # '[' -z 1839593 ']' 00:19:57.843 05:22:14 -- common/autotest_common.sh@940 -- # kill -0 1839593 00:19:57.843 05:22:14 -- common/autotest_common.sh@941 -- # uname 00:19:57.843 05:22:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.843 05:22:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1839593 00:19:57.843 05:22:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:57.843 05:22:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:57.843 05:22:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1839593' 00:19:57.843 killing process with pid 1839593 00:19:57.843 05:22:14 -- common/autotest_common.sh@955 -- # kill 1839593 00:19:57.843 05:22:14 -- common/autotest_common.sh@960 -- # wait 1839593 00:19:58.103 05:22:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.103 05:22:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:58.103 00:19:58.103 real 0m16.182s 00:19:58.103 user 0m45.741s 00:19:58.103 sys 0m6.257s 00:19:58.103 05:22:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:58.103 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:19:58.103 ************************************ 00:19:58.103 END TEST nvmf_nmic 00:19:58.103 ************************************ 00:19:58.103 05:22:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:58.103 05:22:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:58.103 05:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.103 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:19:58.103 ************************************ 00:19:58.103 START TEST nvmf_fio_target 00:19:58.103 ************************************ 00:19:58.103 05:22:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:58.103 * Looking for test storage... 
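The nmic run above tears down in three steps: disconnect the host-side controller, retry unloading nvme-rdma and nvme-fabrics (the rmmod lines), and kill the target by pid. A condensed sketch of that sequence; the retry delay is an assumption, and the pid is the one from this run:

    # Teardown as traced for nvmf_nmic (target pid 1839593 in this run).
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    for i in {1..20}; do
        # modprobe -r can race with in-flight disconnects, hence the loop.
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    kill 1839593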
00:19:58.103 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:58.103 05:22:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:58.103 05:22:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:58.103 05:22:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:58.103 05:22:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:58.103 05:22:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:58.103 05:22:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:58.103 05:22:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:58.103 05:22:14 -- scripts/common.sh@335 -- # IFS=.-: 00:19:58.103 05:22:14 -- scripts/common.sh@335 -- # read -ra ver1 00:19:58.103 05:22:14 -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.103 05:22:14 -- scripts/common.sh@336 -- # read -ra ver2 00:19:58.103 05:22:14 -- scripts/common.sh@337 -- # local 'op=<' 00:19:58.103 05:22:14 -- scripts/common.sh@339 -- # ver1_l=2 00:19:58.103 05:22:14 -- scripts/common.sh@340 -- # ver2_l=1 00:19:58.103 05:22:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:58.103 05:22:14 -- scripts/common.sh@343 -- # case "$op" in 00:19:58.103 05:22:14 -- scripts/common.sh@344 -- # : 1 00:19:58.103 05:22:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:58.103 05:22:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:58.103 05:22:14 -- scripts/common.sh@364 -- # decimal 1 00:19:58.103 05:22:14 -- scripts/common.sh@352 -- # local d=1 00:19:58.103 05:22:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.103 05:22:14 -- scripts/common.sh@354 -- # echo 1 00:19:58.103 05:22:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:58.103 05:22:14 -- scripts/common.sh@365 -- # decimal 2 00:19:58.103 05:22:14 -- scripts/common.sh@352 -- # local d=2 00:19:58.103 05:22:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.103 05:22:14 -- scripts/common.sh@354 -- # echo 2 00:19:58.103 05:22:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:58.103 05:22:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:58.103 05:22:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:58.103 05:22:14 -- scripts/common.sh@367 -- # return 0 00:19:58.103 05:22:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.103 05:22:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.103 --rc genhtml_branch_coverage=1 00:19:58.103 --rc genhtml_function_coverage=1 00:19:58.103 --rc genhtml_legend=1 00:19:58.103 --rc geninfo_all_blocks=1 00:19:58.103 --rc geninfo_unexecuted_blocks=1 00:19:58.103 00:19:58.103 ' 00:19:58.103 05:22:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.103 --rc genhtml_branch_coverage=1 00:19:58.103 --rc genhtml_function_coverage=1 00:19:58.103 --rc genhtml_legend=1 00:19:58.103 --rc geninfo_all_blocks=1 00:19:58.103 --rc geninfo_unexecuted_blocks=1 00:19:58.103 00:19:58.103 ' 00:19:58.103 05:22:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.103 --rc genhtml_branch_coverage=1 00:19:58.103 --rc genhtml_function_coverage=1 00:19:58.103 --rc genhtml_legend=1 00:19:58.103 --rc geninfo_all_blocks=1 00:19:58.103 --rc geninfo_unexecuted_blocks=1 00:19:58.103 00:19:58.103 ' 
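Before fio.sh proper, the harness probes the installed lcov and compares its version against 2 (the lt 1.15 2 trace above) to pick matching LCOV_OPTS. A minimal sketch of that comparison, following the cmp_versions trace: split both strings on '.', '-' and ':', then compare numerically field by field, padding missing fields with 0 (the decimal sanitizing step is condensed away here):

    # Returns 0 (true) when $1 is a lower version than $2.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo 'lcov older than 2: keep the legacy flag set'   # true here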
00:19:58.103 05:22:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.103 --rc genhtml_branch_coverage=1 00:19:58.103 --rc genhtml_function_coverage=1 00:19:58.103 --rc genhtml_legend=1 00:19:58.103 --rc geninfo_all_blocks=1 00:19:58.103 --rc geninfo_unexecuted_blocks=1 00:19:58.103 00:19:58.103 ' 00:19:58.103 05:22:14 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.103 05:22:14 -- nvmf/common.sh@7 -- # uname -s 00:19:58.103 05:22:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.103 05:22:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.103 05:22:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.103 05:22:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.103 05:22:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.103 05:22:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.103 05:22:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.103 05:22:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.103 05:22:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.103 05:22:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.364 05:22:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:58.364 05:22:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:58.364 05:22:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.364 05:22:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.364 05:22:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.364 05:22:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.364 05:22:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.364 05:22:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.364 05:22:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.364 05:22:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.364 05:22:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.364 05:22:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.364 05:22:14 -- paths/export.sh@5 -- # export PATH 00:19:58.364 05:22:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.364 05:22:14 -- nvmf/common.sh@46 -- # : 0 00:19:58.364 05:22:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.364 05:22:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.364 05:22:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.364 05:22:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.364 05:22:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.364 05:22:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.364 05:22:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.364 05:22:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.364 05:22:14 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.364 05:22:14 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.364 05:22:14 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:58.364 05:22:14 -- target/fio.sh@16 -- # nvmftestinit 00:19:58.364 05:22:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:58.364 05:22:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.364 05:22:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.364 05:22:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.364 05:22:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.364 05:22:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.364 05:22:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.364 05:22:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.364 05:22:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:58.364 05:22:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:58.364 05:22:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:58.364 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:20:04.939 05:22:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:04.939 05:22:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:04.939 05:22:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:04.939 05:22:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:04.939 05:22:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:04.939 05:22:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:04.939 05:22:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:04.939 05:22:20 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:04.939 05:22:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:04.939 05:22:20 -- nvmf/common.sh@295 -- # e810=() 00:20:04.939 05:22:20 -- nvmf/common.sh@295 -- # local -ga e810 00:20:04.939 05:22:20 -- nvmf/common.sh@296 -- # x722=() 00:20:04.939 05:22:20 -- nvmf/common.sh@296 -- # local -ga x722 00:20:04.939 05:22:20 -- nvmf/common.sh@297 -- # mlx=() 00:20:04.939 05:22:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:04.939 05:22:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.939 05:22:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:04.939 05:22:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:04.939 05:22:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:04.939 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:04.939 05:22:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.939 05:22:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:04.939 05:22:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:04.939 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:04.939 05:22:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.939 05:22:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:04.939 05:22:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:04.939 05:22:20 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.939 05:22:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:04.939 05:22:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.939 05:22:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:04.939 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:04.939 05:22:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:04.939 05:22:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.939 05:22:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:04.939 05:22:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.939 05:22:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:04.939 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:04.939 05:22:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.939 05:22:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:04.939 05:22:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:04.939 05:22:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:04.939 05:22:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:04.939 05:22:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:04.940 05:22:20 -- nvmf/common.sh@57 -- # uname 00:20:04.940 05:22:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:04.940 05:22:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:04.940 05:22:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:04.940 05:22:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:04.940 05:22:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:04.940 05:22:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:04.940 05:22:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:04.940 05:22:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:04.940 05:22:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:04.940 05:22:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:04.940 05:22:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:04.940 05:22:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.940 05:22:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:04.940 05:22:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:04.940 05:22:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.940 05:22:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:04.940 05:22:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@104 -- # continue 2 00:20:04.940 05:22:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:04.940 05:22:21 -- 
nvmf/common.sh@104 -- # continue 2 00:20:04.940 05:22:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:04.940 05:22:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.940 05:22:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:04.940 05:22:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:04.940 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.940 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:04.940 altname enp217s0f0np0 00:20:04.940 altname ens818f0np0 00:20:04.940 inet 192.168.100.8/24 scope global mlx_0_0 00:20:04.940 valid_lft forever preferred_lft forever 00:20:04.940 05:22:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:04.940 05:22:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.940 05:22:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:04.940 05:22:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:04.940 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.940 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:04.940 altname enp217s0f1np1 00:20:04.940 altname ens818f1np1 00:20:04.940 inet 192.168.100.9/24 scope global mlx_0_1 00:20:04.940 valid_lft forever preferred_lft forever 00:20:04.940 05:22:21 -- nvmf/common.sh@410 -- # return 0 00:20:04.940 05:22:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:04.940 05:22:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:04.940 05:22:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:04.940 05:22:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:04.940 05:22:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.940 05:22:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:04.940 05:22:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:04.940 05:22:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.940 05:22:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:04.940 05:22:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@104 -- # continue 2 00:20:04.940 05:22:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.940 05:22:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.940 05:22:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:04.940 05:22:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@104 -- # continue 2 00:20:04.940 05:22:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:04.940 05:22:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.940 05:22:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:04.940 05:22:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.940 05:22:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.940 05:22:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:04.940 192.168.100.9' 00:20:04.940 05:22:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:04.940 192.168.100.9' 00:20:04.940 05:22:21 -- nvmf/common.sh@445 -- # head -n 1 00:20:04.940 05:22:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.940 05:22:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:04.940 192.168.100.9' 00:20:04.940 05:22:21 -- nvmf/common.sh@446 -- # tail -n +2 00:20:04.940 05:22:21 -- nvmf/common.sh@446 -- # head -n 1 00:20:04.940 05:22:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.940 05:22:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:04.940 05:22:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:04.940 05:22:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:04.940 05:22:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:04.940 05:22:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:04.940 05:22:21 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:04.940 05:22:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:04.940 05:22:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.940 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 05:22:21 -- nvmf/common.sh@469 -- # nvmfpid=1844523 00:20:04.940 05:22:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:04.940 05:22:21 -- nvmf/common.sh@470 -- # waitforlisten 1844523 00:20:04.940 05:22:21 -- common/autotest_common.sh@829 -- # '[' -z 1844523 ']' 00:20:04.940 05:22:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.940 05:22:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.940 05:22:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.940 05:22:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.940 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.940 [2024-11-19 05:22:21.269192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
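nvmfappstart above launches the target for the fio test with the same 4-core mask and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that start-and-wait pattern; the poll command and sleep interval are assumptions, while the binary path, flags, socket path, and retry bound come from the trace:

    app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$app" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten, condensed: poll /var/tmp/spdk.sock until the app
    # accepts RPCs, giving up after max_retries=100 as in the trace.
    for (( i = 0; i < 100; i++ )); do
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    echo "nvmfpid=$nvmfpid"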
00:20:04.940 [2024-11-19 05:22:21.269251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.940 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.940 [2024-11-19 05:22:21.342881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.940 [2024-11-19 05:22:21.380333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:04.940 [2024-11-19 05:22:21.380452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.940 [2024-11-19 05:22:21.380463] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.940 [2024-11-19 05:22:21.380473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.940 [2024-11-19 05:22:21.380523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.940 [2024-11-19 05:22:21.380626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.940 [2024-11-19 05:22:21.380652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.940 [2024-11-19 05:22:21.380654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.879 05:22:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.879 05:22:22 -- common/autotest_common.sh@862 -- # return 0 00:20:05.879 05:22:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:05.879 05:22:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.879 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:20:05.879 05:22:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.879 05:22:22 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:05.879 [2024-11-19 05:22:22.311428] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2501200/0x25056f0) succeed. 00:20:05.879 [2024-11-19 05:22:22.320770] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25027f0/0x2546d90) succeed. 
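With the rdma transport created and both mlx5 IB devices up, fio.sh layers its test bdevs as traced below: seven 64 MiB malloc bdevs (512 B blocks), a RAID-0 over two of them, a concat over three more, and one subsystem exposing four namespaces on the first RDMA IP. A condensed sketch of that rpc.py sequence (the loops compress individual trace lines; names and addresses follow this run):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Seven malloc bdevs; rpc.py prints each generated name (Malloc0..Malloc6
    # in this run: Malloc0/1 standalone, the rest feeding the raid bdevs).
    for _ in {1..7}; do "$rpc_py" bdev_malloc_create 64 512; done
    "$rpc_py" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    "$rpc_py" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

On the host side this yields /dev/nvme0n1 through /dev/nvme0n4, the four filenames the fio job file below targets.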
00:20:06.139 05:22:22 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.139 05:22:22 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:06.139 05:22:22 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.398 05:22:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:06.398 05:22:22 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.657 05:22:23 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:06.657 05:22:23 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.916 05:22:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:06.916 05:22:23 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:06.916 05:22:23 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:07.176 05:22:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:07.176 05:22:23 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:07.435 05:22:23 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:07.435 05:22:23 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:07.693 05:22:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:07.693 05:22:24 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:07.952 05:22:24 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:07.952 05:22:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:07.952 05:22:24 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.212 05:22:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:08.212 05:22:24 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:08.471 05:22:24 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:08.472 [2024-11-19 05:22:25.024183] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:08.731 05:22:25 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:08.731 05:22:25 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:08.990 05:22:25 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:10.021 05:22:26 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:10.021 05:22:26 -- common/autotest_common.sh@1187 -- # local 
i=0 00:20:10.021 05:22:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:10.021 05:22:26 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:10.021 05:22:26 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:10.021 05:22:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:11.927 05:22:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:11.927 05:22:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:11.927 05:22:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:11.927 05:22:28 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:11.927 05:22:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:11.927 05:22:28 -- common/autotest_common.sh@1197 -- # return 0 00:20:11.927 05:22:28 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:11.927 [global] 00:20:11.927 thread=1 00:20:11.927 invalidate=1 00:20:11.927 rw=write 00:20:11.927 time_based=1 00:20:11.927 runtime=1 00:20:11.927 ioengine=libaio 00:20:11.927 direct=1 00:20:11.927 bs=4096 00:20:11.927 iodepth=1 00:20:11.927 norandommap=0 00:20:11.927 numjobs=1 00:20:11.927 00:20:11.927 verify_dump=1 00:20:11.927 verify_backlog=512 00:20:11.927 verify_state_save=0 00:20:11.927 do_verify=1 00:20:11.927 verify=crc32c-intel 00:20:11.927 [job0] 00:20:11.927 filename=/dev/nvme0n1 00:20:11.927 [job1] 00:20:11.927 filename=/dev/nvme0n2 00:20:11.927 [job2] 00:20:11.927 filename=/dev/nvme0n3 00:20:11.927 [job3] 00:20:11.927 filename=/dev/nvme0n4 00:20:12.198 Could not set queue depth (nvme0n1) 00:20:12.198 Could not set queue depth (nvme0n2) 00:20:12.198 Could not set queue depth (nvme0n3) 00:20:12.198 Could not set queue depth (nvme0n4) 00:20:12.456 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.456 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.456 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.456 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.456 fio-3.35 00:20:12.456 Starting 4 threads 00:20:13.830 00:20:13.830 job0: (groupid=0, jobs=1): err= 0: pid=1846081: Tue Nov 19 05:22:30 2024 00:20:13.830 read: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec) 00:20:13.830 slat (nsec): min=8436, max=26805, avg=9037.65, stdev=789.24 00:20:13.830 clat (usec): min=64, max=153, avg=99.04, stdev=13.43 00:20:13.830 lat (usec): min=73, max=163, avg=108.08, stdev=13.47 00:20:13.830 clat percentiles (usec): 00:20:13.830 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 85], 00:20:13.830 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 105], 00:20:13.830 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 114], 95.00th=[ 118], 00:20:13.830 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 143], 99.95th=[ 145], 00:20:13.830 | 99.99th=[ 155] 00:20:13.830 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:13.830 slat (nsec): min=8554, max=59025, avg=11506.28, stdev=1307.70 00:20:13.830 clat (usec): min=53, max=162, avg=93.75, stdev=12.91 00:20:13.830 lat (usec): min=70, max=173, avg=105.26, stdev=12.89 00:20:13.830 clat percentiles (usec): 00:20:13.830 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 80], 00:20:13.830 | 30.00th=[ 
89], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 99],
00:20:13.830 | 70.00th=[ 102], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 113],
00:20:13.830 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 141],
00:20:13.830 | 99.99th=[ 163]
00:20:13.830 bw ( KiB/s): min=18776, max=18776, per=26.61%, avg=18776.00, stdev= 0.00, samples=1
00:20:13.830 iops : min= 4694, max= 4694, avg=4694.00, stdev= 0.00, samples=1
00:20:13.830 lat (usec) : 100=52.96%, 250=47.04%
00:20:13.830 cpu : usr=7.20%, sys=12.30%, ctx=9158, majf=0, minf=1
00:20:13.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:13.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.830 issued rwts: total=4549,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.830 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:13.830 job1: (groupid=0, jobs=1): err= 0: pid=1846082: Tue Nov 19 05:22:30 2024
00:20:13.830 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec)
00:20:13.830 slat (nsec): min=8404, max=30197, avg=9313.80, stdev=1604.10
00:20:13.830 clat (usec): min=73, max=170, avg=106.31, stdev= 8.58
00:20:13.830 lat (usec): min=82, max=179, avg=115.62, stdev= 8.83
00:20:13.830 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 100],
00:20:13.831 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108],
00:20:13.831 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 121],
00:20:13.831 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 159], 99.95th=[ 163],
00:20:13.831 | 99.99th=[ 172]
00:20:13.831 write: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1001msec); 0 zone resets
00:20:13.831 slat (nsec): min=10397, max=53431, avg=11953.28, stdev=2099.23
00:20:13.831 clat (usec): min=69, max=185, avg=102.50, stdev=10.90
00:20:13.831 lat (usec): min=80, max=238, avg=114.45, stdev=11.53
00:20:13.831 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 83], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95],
00:20:13.831 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103],
00:20:13.831 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 122],
00:20:13.831 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 172], 99.95th=[ 178],
00:20:13.831 | 99.99th=[ 186]
00:20:13.831 bw ( KiB/s): min=18760, max=18760, per=26.59%, avg=18760.00, stdev= 0.00, samples=1
00:20:13.831 iops : min= 4690, max= 4690, avg=4690.00, stdev= 0.00, samples=1
00:20:13.831 lat (usec) : 100=34.10%, 250=65.90%
00:20:13.831 cpu : usr=6.60%, sys=11.80%, ctx=8501, majf=0, minf=1
00:20:13.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:13.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 issued rwts: total=4096,4405,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.831 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:13.831 job2: (groupid=0, jobs=1): err= 0: pid=1846083: Tue Nov 19 05:22:30 2024
00:20:13.831 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec)
00:20:13.831 slat (nsec): min=8557, max=41526, avg=9257.75, stdev=1214.18
00:20:13.831 clat (usec): min=74, max=185, avg=106.91, stdev=16.11
00:20:13.831 lat (usec): min=84, max=204, avg=116.17, stdev=16.22
00:20:13.831 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90],
00:20:13.831 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 111], 60.00th=[ 116],
00:20:13.831 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 130],
00:20:13.831 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 180],
00:20:13.831 | 99.99th=[ 186]
00:20:13.831 write: IOPS=4428, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec); 0 zone resets
00:20:13.831 slat (nsec): min=10458, max=41798, avg=11674.61, stdev=1161.19
00:20:13.831 clat (usec): min=70, max=159, avg=101.82, stdev=13.89
00:20:13.831 lat (usec): min=82, max=182, avg=113.49, stdev=13.87
00:20:13.831 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87],
00:20:13.831 | 30.00th=[ 91], 40.00th=[ 98], 50.00th=[ 106], 60.00th=[ 110],
00:20:13.831 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 121],
00:20:13.831 | 99.00th=[ 130], 99.50th=[ 141], 99.90th=[ 153], 99.95th=[ 157],
00:20:13.831 | 99.99th=[ 159]
00:20:13.831 bw ( KiB/s): min=16384, max=16384, per=23.22%, avg=16384.00, stdev= 0.00, samples=1
00:20:13.831 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:20:13.831 lat (usec) : 100=41.90%, 250=58.10%
00:20:13.831 cpu : usr=7.00%, sys=11.30%, ctx=8530, majf=0, minf=1
00:20:13.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:13.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 issued rwts: total=4096,4433,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.831 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:13.831 job3: (groupid=0, jobs=1): err= 0: pid=1846084: Tue Nov 19 05:22:30 2024
00:20:13.831 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec)
00:20:13.831 slat (nsec): min=8602, max=30429, avg=9303.73, stdev=781.72
00:20:13.831 clat (usec): min=71, max=174, avg=110.47, stdev=16.24
00:20:13.831 lat (usec): min=80, max=183, avg=119.77, stdev=16.27
00:20:13.831 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 90],
00:20:13.831 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 118],
00:20:13.831 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 131],
00:20:13.831 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 165], 99.95th=[ 172],
00:20:13.831 | 99.99th=[ 176]
00:20:13.831 write: IOPS=4208, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1001msec); 0 zone resets
00:20:13.831 slat (nsec): min=10483, max=68157, avg=11760.88, stdev=1279.60
00:20:13.831 clat (usec): min=69, max=162, avg=104.26, stdev=15.11
00:20:13.831 lat (usec): min=81, max=190, avg=116.02, stdev=15.15
00:20:13.831 clat percentiles (usec):
00:20:13.831 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85],
00:20:13.831 | 30.00th=[ 101], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 112],
00:20:13.831 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 123],
00:20:13.831 | 99.00th=[ 133], 99.50th=[ 141], 99.90th=[ 153], 99.95th=[ 155],
00:20:13.831 | 99.99th=[ 163]
00:20:13.831 bw ( KiB/s): min=16384, max=16384, per=23.22%, avg=16384.00, stdev= 0.00, samples=1
00:20:13.831 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:20:13.831 lat (usec) : 100=27.44%, 250=72.56%
00:20:13.831 cpu : usr=7.00%, sys=10.90%, ctx=8310, majf=0, minf=1
00:20:13.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:13.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.831 issued rwts: total=4096,4213,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.831 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:13.831
00:20:13.831 Run status group 0 (all jobs):
00:20:13.831 READ: bw=65.7MiB/s (68.9MB/s), 16.0MiB/s-17.8MiB/s (16.8MB/s-18.6MB/s), io=65.8MiB (69.0MB), run=1001-1001msec
00:20:13.831 WRITE: bw=68.9MiB/s (72.3MB/s), 16.4MiB/s-18.0MiB/s (17.2MB/s-18.9MB/s), io=69.0MiB (72.3MB), run=1001-1001msec
00:20:13.831
00:20:13.831 Disk stats (read/write):
00:20:13.831 nvme0n1: ios=3717/4096, merge=0/0, ticks=338/349, in_queue=687, util=84.27%
00:20:13.831 nvme0n2: ios=3523/3584, merge=0/0, ticks=336/347, in_queue=683, util=85.47%
00:20:13.831 nvme0n3: ios=3336/3584, merge=0/0, ticks=337/351, in_queue=688, util=88.45%
00:20:13.831 nvme0n4: ios=3072/3541, merge=0/0, ticks=333/343, in_queue=676, util=89.50%
00:20:13.831 05:22:30 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:20:13.831 [global]
00:20:13.831 thread=1
00:20:13.831 invalidate=1
00:20:13.831 rw=randwrite
00:20:13.831 time_based=1
00:20:13.831 runtime=1
00:20:13.831 ioengine=libaio
00:20:13.831 direct=1
00:20:13.831 bs=4096
00:20:13.831 iodepth=1
00:20:13.831 norandommap=0
00:20:13.831 numjobs=1
00:20:13.831
00:20:13.831 verify_dump=1
00:20:13.831 verify_backlog=512
00:20:13.831 verify_state_save=0
00:20:13.831 do_verify=1
00:20:13.831 verify=crc32c-intel
00:20:13.831 [job0]
00:20:13.831 filename=/dev/nvme0n1
00:20:13.831 [job1]
00:20:13.831 filename=/dev/nvme0n2
00:20:13.831 [job2]
00:20:13.831 filename=/dev/nvme0n3
00:20:13.831 [job3]
00:20:13.831 filename=/dev/nvme0n4
00:20:13.831 Could not set queue depth (nvme0n1)
00:20:13.831 Could not set queue depth (nvme0n2)
00:20:13.831 Could not set queue depth (nvme0n3)
00:20:13.831 Could not set queue depth (nvme0n4)
00:20:14.089 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:14.089 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:14.089 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:14.089 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:14.089 fio-3.35
00:20:14.089 Starting 4 threads
00:20:15.463
00:20:15.463 job0: (groupid=0, jobs=1): err= 0: pid=1846504: Tue Nov 19 05:22:31 2024
00:20:15.463 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec)
00:20:15.463 slat (nsec): min=8410, max=20000, avg=9069.50, stdev=820.12
00:20:15.463 clat (usec): min=61, max=205, avg=94.83, stdev=24.54
00:20:15.463 lat (usec): min=71, max=215, avg=103.90, stdev=24.66
00:20:15.463 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74],
00:20:15.464 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 108],
00:20:15.464 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 126], 95.00th=[ 137],
00:20:15.464 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 190],
00:20:15.464 | 99.99th=[ 206]
00:20:15.464 write: IOPS=4781, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1001msec); 0 zone resets
00:20:15.464 slat (nsec): min=8599, max=60922, avg=11154.54, stdev=1444.20
00:20:15.464 clat (usec): min=57, max=321, avg=92.86, stdev=24.25
00:20:15.464 lat (usec): min=70, max=332, avg=104.02, stdev=24.26
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71],
00:20:15.464 | 30.00th=[ 73], 40.00th=[ 76], 50.00th=[ 81], 60.00th=[ 103],
00:20:15.464 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 127], 95.00th=[ 133],
00:20:15.464 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 182],
00:20:15.464 | 99.99th=[ 322]
00:20:15.464 bw ( KiB/s): min=16384, max=16384, per=25.37%, avg=16384.00, stdev= 0.00, samples=1
00:20:15.464 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:20:15.464 lat (usec) : 100=56.33%, 250=43.66%, 500=0.01%
00:20:15.464 cpu : usr=7.00%, sys=12.70%, ctx=9396, majf=0, minf=1
00:20:15.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:15.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 issued rwts: total=4608,4786,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:15.464 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:15.464 job1: (groupid=0, jobs=1): err= 0: pid=1846505: Tue Nov 19 05:22:31 2024
00:20:15.464 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:20:15.464 slat (nsec): min=8389, max=33051, avg=9101.66, stdev=1165.47
00:20:15.464 clat (usec): min=65, max=209, avg=126.92, stdev=22.06
00:20:15.464 lat (usec): min=74, max=218, avg=136.02, stdev=22.08
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 75], 5.00th=[ 92], 10.00th=[ 105], 20.00th=[ 111],
00:20:15.464 | 30.00th=[ 114], 40.00th=[ 118], 50.00th=[ 125], 60.00th=[ 133],
00:20:15.464 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165],
00:20:15.464 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 206],
00:20:15.464 | 99.99th=[ 210]
00:20:15.464 write: IOPS=3689, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets
00:20:15.464 slat (nsec): min=10069, max=40980, avg=11188.87, stdev=1630.84
00:20:15.464 clat (usec): min=62, max=200, avg=123.03, stdev=20.97
00:20:15.464 lat (usec): min=73, max=211, avg=134.22, stdev=20.90
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 71], 5.00th=[ 85], 10.00th=[ 102], 20.00th=[ 108],
00:20:15.464 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 122], 60.00th=[ 128],
00:20:15.464 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 161],
00:20:15.464 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 194],
00:20:15.464 | 99.99th=[ 200]
00:20:15.464 bw ( KiB/s): min=16384, max=16384, per=25.37%, avg=16384.00, stdev= 0.00, samples=1
00:20:15.464 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:20:15.464 lat (usec) : 100=7.24%, 250=92.76%
00:20:15.464 cpu : usr=4.70%, sys=10.50%, ctx=7277, majf=0, minf=1
00:20:15.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:15.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 issued rwts: total=3584,3693,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:15.464 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:15.464 job2: (groupid=0, jobs=1): err= 0: pid=1846506: Tue Nov 19 05:22:31 2024
00:20:15.464 read: IOPS=3678, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec)
00:20:15.464 slat (nsec): min=8553, max=39520, avg=9804.87, stdev=2757.06
00:20:15.464 clat (usec): min=57, max=199, avg=112.80, stdev=24.21
00:20:15.464 lat (usec): min=79, max=208, avg=122.60, stdev=23.72
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 88],
00:20:15.464 | 30.00th=[ 100], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 116],
00:20:15.464 | 70.00th=[ 126], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153],
00:20:15.464 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 198],
00:20:15.464 | 99.99th=[ 200]
00:20:15.464 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets
00:20:15.464 slat (nsec): min=10393, max=46176, avg=12017.39, stdev=2963.85
00:20:15.464 clat (usec): min=56, max=210, avg=117.20, stdev=25.40
00:20:15.464 lat (usec): min=76, max=221, avg=129.21, stdev=24.81
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 72], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 93],
00:20:15.464 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 123],
00:20:15.464 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 159],
00:20:15.464 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 202],
00:20:15.464 | 99.99th=[ 210]
00:20:15.464 bw ( KiB/s): min=18296, max=18296, per=28.33%, avg=18296.00, stdev= 0.00, samples=1
00:20:15.464 iops : min= 4574, max= 4574, avg=4574.00, stdev= 0.00, samples=1
00:20:15.464 lat (usec) : 100=25.60%, 250=74.40%
00:20:15.464 cpu : usr=6.30%, sys=10.00%, ctx=7778, majf=0, minf=1
00:20:15.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:15.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 issued rwts: total=3682,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:15.464 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:15.464 job3: (groupid=0, jobs=1): err= 0: pid=1846507: Tue Nov 19 05:22:31 2024
00:20:15.464 read: IOPS=3540, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec)
00:20:15.464 slat (nsec): min=8585, max=29736, avg=10093.90, stdev=2354.36
00:20:15.464 clat (usec): min=74, max=221, avg=129.36, stdev=20.58
00:20:15.464 lat (usec): min=83, max=236, avg=139.45, stdev=21.40
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 85], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 113],
00:20:15.464 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 127], 60.00th=[ 135],
00:20:15.464 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 172],
00:20:15.464 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 208],
00:20:15.464 | 99.99th=[ 223]
00:20:15.464 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:20:15.464 slat (nsec): min=10274, max=39134, avg=12089.97, stdev=2523.12
00:20:15.464 clat (usec): min=66, max=198, avg=123.90, stdev=22.03
00:20:15.464 lat (usec): min=77, max=217, avg=135.99, stdev=22.86
00:20:15.464 clat percentiles (usec):
00:20:15.464 | 1.00th=[ 81], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 105],
00:20:15.464 | 30.00th=[ 109], 40.00th=[ 114], 50.00th=[ 122], 60.00th=[ 128],
00:20:15.464 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 169],
00:20:15.464 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 196],
00:20:15.464 | 99.99th=[ 200]
00:20:15.464 bw ( KiB/s): min=16384, max=16384, per=25.37%, avg=16384.00, stdev= 0.00, samples=1
00:20:15.464 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:20:15.464 lat (usec) : 100=5.64%, 250=94.36%
00:20:15.464 cpu : usr=5.20%, sys=10.30%, ctx=7128, majf=0, minf=1
00:20:15.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:15.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:15.464 issued rwts: total=3544,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:15.464 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:15.464
00:20:15.464 Run status group 0 (all jobs):
00:20:15.464 READ: bw=60.2MiB/s (63.1MB/s), 13.8MiB/s-18.0MiB/s (14.5MB/s-18.9MB/s), io=60.2MiB (63.2MB), run=1001-1001msec
00:20:15.464 WRITE: bw=63.1MiB/s (66.1MB/s), 14.0MiB/s-18.7MiB/s (14.7MB/s-19.6MB/s), io=63.1MiB (66.2MB), run=1001-1001msec
00:20:15.464
00:20:15.464 Disk stats (read/write):
00:20:15.464 nvme0n1: ios=3633/3863, merge=0/0, ticks=343/333, in_queue=676, util=84.25%
00:20:15.464 nvme0n2: ios=3072/3076, merge=0/0, ticks=350/322, in_queue=672, util=85.52%
00:20:15.464 nvme0n3: ios=3072/3580, merge=0/0, ticks=308/379, in_queue=687, util=88.43%
00:20:15.464 nvme0n4: ios=2991/3072, merge=0/0, ticks=363/343, in_queue=706, util=89.47%
00:20:15.464 05:22:31 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:20:15.464 [global]
00:20:15.464 thread=1
00:20:15.464 invalidate=1
00:20:15.464 rw=write
00:20:15.464 time_based=1
00:20:15.464 runtime=1
00:20:15.464 ioengine=libaio
00:20:15.464 direct=1
00:20:15.464 bs=4096
00:20:15.464 iodepth=128
00:20:15.464 norandommap=0
00:20:15.464 numjobs=1
00:20:15.464
00:20:15.464 verify_dump=1
00:20:15.464 verify_backlog=512
00:20:15.464 verify_state_save=0
00:20:15.464 do_verify=1
00:20:15.464 verify=crc32c-intel
00:20:15.464 [job0]
00:20:15.464 filename=/dev/nvme0n1
00:20:15.464 [job1]
00:20:15.464 filename=/dev/nvme0n2
00:20:15.464 [job2]
00:20:15.464 filename=/dev/nvme0n3
00:20:15.464 [job3]
00:20:15.464 filename=/dev/nvme0n4
00:20:15.464 Could not set queue depth (nvme0n1)
00:20:15.464 Could not set queue depth (nvme0n2)
00:20:15.464 Could not set queue depth (nvme0n3)
00:20:15.464 Could not set queue depth (nvme0n4)
00:20:15.723 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:15.723 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:15.723 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:15.723 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:15.723 fio-3.35
00:20:15.723 Starting 4 threads
00:20:17.104
00:20:17.104 job0: (groupid=0, jobs=1): err= 0: pid=1846936: Tue Nov 19 05:22:33 2024
00:20:17.104 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(39.3MiB/1005msec)
00:20:17.104 slat (usec): min=2, max=1054, avg=48.87, stdev=177.82
00:20:17.104 clat (usec): min=4263, max=10470, avg=6453.05, stdev=371.36
00:20:17.104 lat (usec): min=5063, max=10478, avg=6501.92, stdev=374.05
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[ 5604], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6259],
00:20:17.104 | 30.00th=[ 6325], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6521],
00:20:17.104 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6783], 95.00th=[ 6915],
00:20:17.104 | 99.00th=[ 7373], 99.50th=[ 8094], 99.90th=[ 9765], 99.95th=[10421],
00:20:17.104 | 99.99th=[10421]
00:20:17.104 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(40.0MiB/1005msec); 0 zone resets
00:20:17.104 slat (usec): min=2, max=1611, avg=46.97, stdev=169.87
00:20:17.104 clat (usec): min=1338, max=7419, avg=6105.60, stdev=387.95
00:20:17.104 lat (usec): min=1352, max=7603, avg=6152.57, stdev=414.94
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[ 4686], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 5997],
00:20:17.104 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6194],
00:20:17.104 | 70.00th=[ 6259], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6587],
00:20:17.104 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 7177], 99.95th=[ 7242],
00:20:17.104 | 99.99th=[ 7308]
00:20:17.104 bw ( KiB/s): min=40960, max=40960, per=35.50%, avg=40960.00, stdev= 0.00, samples=2
00:20:17.104 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=2
00:20:17.104 lat (msec) : 2=0.02%, 4=0.27%, 10=99.67%, 20=0.04%
00:20:17.104 cpu : usr=3.69%, sys=6.67%, ctx=1260, majf=0, minf=2
00:20:17.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:20:17.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:17.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:17.104 issued rwts: total=10073,10240,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:17.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:17.104 job1: (groupid=0, jobs=1): err= 0: pid=1846937: Tue Nov 19 05:22:33 2024
00:20:17.104 read: IOPS=9711, BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec)
00:20:17.104 slat (usec): min=2, max=1484, avg=50.10, stdev=183.52
00:20:17.104 clat (usec): min=1800, max=7325, avg=6554.31, stdev=261.41
00:20:17.104 lat (usec): min=2473, max=8030, avg=6604.41, stdev=236.54
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6456],
00:20:17.104 | 30.00th=[ 6521], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6587],
00:20:17.104 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6849], 95.00th=[ 6915],
00:20:17.104 | 99.00th=[ 7046], 99.50th=[ 7111], 99.90th=[ 7111], 99.95th=[ 7308],
00:20:17.104 | 99.99th=[ 7308]
00:20:17.104 write: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(40.0MiB/1002msec); 0 zone resets
00:20:17.104 slat (usec): min=2, max=2005, avg=47.49, stdev=173.34
00:20:17.104 clat (usec): min=2518, max=8080, avg=6176.57, stdev=328.88
00:20:17.104 lat (usec): min=2527, max=8083, avg=6224.06, stdev=313.12
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6063],
00:20:17.104 | 30.00th=[ 6128], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6259],
00:20:17.104 | 70.00th=[ 6325], 80.00th=[ 6390], 90.00th=[ 6456], 95.00th=[ 6521],
00:20:17.104 | 99.00th=[ 6652], 99.50th=[ 6718], 99.90th=[ 8029], 99.95th=[ 8029],
00:20:17.104 | 99.99th=[ 8094]
00:20:17.104 bw ( KiB/s): min=39968, max=40960, per=35.07%, avg=40464.00, stdev=701.45, samples=2
00:20:17.104 iops : min= 9992, max=10240, avg=10116.00, stdev=175.36, samples=2
00:20:17.104 lat (msec) : 2=0.01%, 4=0.16%, 10=99.83%
00:20:17.104 cpu : usr=3.40%, sys=6.19%, ctx=1242, majf=0, minf=1
00:20:17.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:20:17.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:17.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:17.104 issued rwts: total=9731,10240,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:17.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:17.104 job2: (groupid=0, jobs=1): err= 0: pid=1846938: Tue Nov 19 05:22:33 2024
00:20:17.104 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec)
00:20:17.104 slat (usec): min=2, max=1222, avg=121.65, stdev=312.00
00:20:17.104 clat (usec): min=13885, max=17334, avg=15680.26, stdev=499.30
00:20:17.104 lat (usec): min=14110, max=17352, avg=15801.91, stdev=480.69
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[14353], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270],
00:20:17.104 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[15926],
00:20:17.104 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319],
00:20:17.104 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171],
00:20:17.104 | 99.99th=[17433]
00:20:17.104 write: IOPS=4239, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec); 0 zone resets
00:20:17.104 slat (usec): min=2, max=2796, avg=114.95, stdev=297.66
00:20:17.104 clat (usec): min=3355, max=18457, avg=14726.20, stdev=1106.32
00:20:17.104 lat (usec): min=4265, max=18460, avg=14841.15, stdev=1097.72
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[ 8979], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484],
00:20:17.104 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008],
00:20:17.104 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15401], 95.00th=[15533],
00:20:17.104 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17433], 99.95th=[18482],
00:20:17.104 | 99.99th=[18482]
00:20:17.104 bw ( KiB/s): min=16384, max=16648, per=14.31%, avg=16516.00, stdev=186.68, samples=2
00:20:17.104 iops : min= 4096, max= 4162, avg=4129.00, stdev=46.67, samples=2
00:20:17.104 lat (msec) : 4=0.01%, 10=0.66%, 20=99.33%
00:20:17.104 cpu : usr=1.50%, sys=3.49%, ctx=1175, majf=0, minf=1
00:20:17.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:20:17.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:17.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:17.104 issued rwts: total=4096,4256,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:17.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:17.104 job3: (groupid=0, jobs=1): err= 0: pid=1846939: Tue Nov 19 05:22:33 2024
00:20:17.104 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec)
00:20:17.104 slat (usec): min=2, max=1173, avg=121.47, stdev=311.32
00:20:17.104 clat (usec): min=14054, max=17383, avg=15679.86, stdev=480.33
00:20:17.104 lat (usec): min=14103, max=17452, avg=15801.33, stdev=455.42
00:20:17.104 clat percentiles (usec):
00:20:17.104 | 1.00th=[14353], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270],
00:20:17.104 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[15926],
00:20:17.104 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319],
00:20:17.104 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171],
00:20:17.104 | 99.99th=[17433]
00:20:17.104 write: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec); 0 zone resets
00:20:17.104 slat (usec): min=2, max=1642, avg=115.10, stdev=296.48
00:20:17.105 clat (usec): min=3377, max=19408, avg=14741.58, stdev=1087.73
00:20:17.105 lat (usec): min=4292, max=19411, avg=14856.68, stdev=1082.37
00:20:17.105 clat percentiles (usec):
00:20:17.105 | 1.00th=[ 8979], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484],
00:20:17.105 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008],
00:20:17.105 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15401], 95.00th=[15533],
00:20:17.105 | 99.00th=[16188], 99.50th=[16581], 99.90th=[18482], 99.95th=[19530],
00:20:17.105 | 99.99th=[19530]
00:20:17.105 bw ( KiB/s): min=16384, max=16656, per=14.32%, avg=16520.00, stdev=192.33, samples=2
00:20:17.105 iops : min= 4096, max= 4164, avg=4130.00, stdev=48.08, samples=2
00:20:17.105 lat (msec) : 4=0.01%, 10=0.66%, 20=99.33%
00:20:17.105 cpu : usr=1.50%, sys=3.49%, ctx=1202, majf=0, minf=1
00:20:17.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:20:17.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:17.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:17.105 issued rwts: total=4096,4257,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:17.105 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:17.105
00:20:17.105 Run status group 0 (all jobs):
00:20:17.105 READ: bw=109MiB/s (114MB/s), 15.9MiB/s-39.2MiB/s (16.7MB/s-41.1MB/s), io=109MiB (115MB), run=1002-1005msec
00:20:17.105 WRITE: bw=113MiB/s (118MB/s), 16.6MiB/s-39.9MiB/s (17.4MB/s-41.9MB/s), io=113MiB (119MB), run=1002-1005msec
00:20:17.105
00:20:17.105 Disk stats (read/write):
00:20:17.105 nvme0n1: ios=8074/8192, merge=0/0, ticks=50968/49260, in_queue=100228, util=81.44%
00:20:17.105 nvme0n2: ios=7836/8192, merge=0/0, ticks=25154/24751, in_queue=49905, util=82.84%
00:20:17.105 nvme0n3: ios=3109/3584, merge=0/0, ticks=16179/17374, in_queue=33553, util=87.51%
00:20:17.105 nvme0n4: ios=3111/3584, merge=0/0, ticks=16173/17406, in_queue=33579, util=89.16%
00:20:17.105 05:22:33 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:20:17.105 [global]
00:20:17.105 thread=1
00:20:17.105 invalidate=1
00:20:17.105 rw=randwrite
00:20:17.105 time_based=1
00:20:17.105 runtime=1
00:20:17.105 ioengine=libaio
00:20:17.105 direct=1
00:20:17.105 bs=4096
00:20:17.105 iodepth=128
00:20:17.105 norandommap=0
00:20:17.105 numjobs=1
00:20:17.105
00:20:17.105 verify_dump=1
00:20:17.105 verify_backlog=512
00:20:17.105 verify_state_save=0
00:20:17.105 do_verify=1
00:20:17.105 verify=crc32c-intel
00:20:17.105 [job0]
00:20:17.105 filename=/dev/nvme0n1
00:20:17.105 [job1]
00:20:17.105 filename=/dev/nvme0n2
00:20:17.105 [job2]
00:20:17.105 filename=/dev/nvme0n3
00:20:17.105 [job3]
00:20:17.105 filename=/dev/nvme0n4
00:20:17.105 Could not set queue depth (nvme0n1)
00:20:17.105 Could not set queue depth (nvme0n2)
00:20:17.105 Could not set queue depth (nvme0n3)
00:20:17.105 Could not set queue depth (nvme0n4)
00:20:17.363 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:17.363 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:17.363 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:17.363 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:17.363 fio-3.35
00:20:17.363 Starting 4 threads
00:20:18.737
00:20:18.737 job0: (groupid=0, jobs=1): err= 0: pid=1847361: Tue Nov 19 05:22:34 2024
00:20:18.737 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec)
00:20:18.737 slat (usec): min=2, max=1118, avg=56.84, stdev=210.26
00:20:18.737 clat (usec): min=6137, max=8005, avg=7391.08, stdev=254.95
00:20:18.737 lat (usec): min=6233, max=8734, avg=7447.92, stdev=227.61
00:20:18.737 clat percentiles (usec):
00:20:18.737 | 1.00th=[ 6456], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 7308],
00:20:18.737 | 30.00th=[ 7373], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7439],
00:20:18.737 | 70.00th=[ 7504], 80.00th=[ 7570], 90.00th=[ 7635], 95.00th=[ 7701],
00:20:18.737 | 99.00th=[ 7832], 99.50th=[ 7832], 99.90th=[ 8029], 99.95th=[ 8029],
00:20:18.737 | 99.99th=[ 8029]
00:20:18.737 write: IOPS=8938, BW=34.9MiB/s (36.6MB/s)(35.0MiB/1003msec); 0 zone resets
00:20:18.737 slat (usec): min=2, max=1749, avg=54.06, stdev=199.87
00:20:18.737 clat (usec): min=2043, max=8923, avg=7005.79, stdev=387.95
00:20:18.737 lat (usec): min=2798, max=8927, avg=7059.85, stdev=374.63
00:20:18.737 clat percentiles (usec):
00:20:18.737 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 6915],
00:20:18.737 | 30.00th=[ 6980], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7111],
00:20:18.737 | 70.00th=[ 7177], 80.00th=[ 7177], 90.00th=[ 7242], 95.00th=[ 7308],
00:20:18.737 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8848], 99.95th=[ 8848],
00:20:18.737 | 99.99th=[ 8979]
00:20:18.737 bw ( KiB/s): min=33772, max=36864, per=27.30%, avg=35318.00, stdev=2186.37, samples=2
00:20:18.737 iops : min= 8443, max= 9216, avg=8829.50, stdev=546.59, samples=2
00:20:18.738 lat (msec) : 4=0.18%, 10=99.82%
00:20:18.738 cpu : usr=3.09%, sys=4.59%, ctx=1099, majf=0, minf=1
00:20:18.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:20:18.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:18.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:18.738 issued rwts: total=8704,8965,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:18.738 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:18.738 job1: (groupid=0, jobs=1): err= 0: pid=1847362: Tue Nov 19 05:22:34 2024
00:20:18.738 read: IOPS=8679, BW=33.9MiB/s (35.6MB/s)(33.9MiB/1001msec)
00:20:18.738 slat (usec): min=2, max=1683, avg=57.29, stdev=203.52
00:20:18.738 clat (usec): min=377, max=9113, avg=7450.60, stdev=671.74
00:20:18.738 lat (usec): min=1184, max=9117, avg=7507.89, stdev=692.71
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 6849], 20.00th=[ 7046],
00:20:18.738 | 30.00th=[ 7177], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504],
00:20:18.738 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586],
00:20:18.738 | 99.00th=[ 8848], 99.50th=[ 8848], 99.90th=[ 8979], 99.95th=[ 8979],
00:20:18.738 | 99.99th=[ 9110]
00:20:18.738 write: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(34.0MiB/1001msec); 0 zone resets
00:20:18.738 slat (usec): min=2, max=1276, avg=55.27, stdev=192.48
00:20:18.738 clat (usec): min=6168, max=8799, avg=7122.77, stdev=529.58
00:20:18.738 lat (usec): min=6171, max=8801, avg=7178.04, stdev=555.59
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 6259], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652],
00:20:18.738 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111],
00:20:18.738 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8160],
00:20:18.738 | 99.00th=[ 8455], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 8717],
00:20:18.738 | 99.99th=[ 8848]
00:20:18.738 bw ( KiB/s): min=35976, max=35976, per=27.81%, avg=35976.00, stdev= 0.00, samples=1
00:20:18.738 iops : min= 8994, max= 8994, avg=8994.00, stdev= 0.00, samples=1
00:20:18.738 lat (usec) : 500=0.01%
00:20:18.738 lat (msec) : 2=0.05%, 4=0.24%, 10=99.71%
00:20:18.738 cpu : usr=2.70%, sys=5.10%, ctx=1364, majf=0, minf=1
00:20:18.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:20:18.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:18.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:18.738 issued rwts: total=8688,8704,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:18.738 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:18.738 job2: (groupid=0, jobs=1): err= 0: pid=1847369: Tue Nov 19 05:22:34 2024
00:20:18.738 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec)
00:20:18.738 slat (usec): min=2, max=1432, avg=69.22, stdev=260.93
00:20:18.738 clat (usec): min=7405, max=9815, avg=9008.72, stdev=347.77
00:20:18.738 lat (usec): min=7518, max=11098, avg=9077.94, stdev=318.82
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 7767], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 8848],
00:20:18.738 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 9110],
00:20:18.738 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9503],
00:20:18.738 | 99.00th=[ 9765], 99.50th=[ 9765], 99.90th=[ 9765], 99.95th=[ 9765],
00:20:18.738 | 99.99th=[ 9765]
00:20:18.738 write: IOPS=7281, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1003msec); 0 zone resets
00:20:18.738 slat (usec): min=2, max=3084, avg=66.09, stdev=249.53
00:20:18.738 clat (usec): min=2354, max=11566, avg=8557.88, stdev=510.28
00:20:18.738 lat (usec): min=3277, max=11570, avg=8623.97, stdev=492.70
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 8455],
00:20:18.738 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8717],
00:20:18.738 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 9110],
00:20:18.738 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[11469], 99.95th=[11600],
00:20:18.738 | 99.99th=[11600]
00:20:18.738 bw ( KiB/s): min=28672, max=28678, per=22.17%, avg=28675.00, stdev= 4.24, samples=2
00:20:18.738 iops : min= 7168, max= 7169, avg=7168.50, stdev= 0.71, samples=2
00:20:18.738 lat (msec) : 4=0.06%, 10=99.74%, 20=0.21%
00:20:18.738 cpu : usr=2.40%, sys=5.19%, ctx=901, majf=0, minf=1
00:20:18.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:20:18.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:18.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:18.738 issued rwts: total=7168,7303,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:18.738 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:18.738 job3: (groupid=0, jobs=1): err= 0: pid=1847373: Tue Nov 19 05:22:34 2024
00:20:18.738 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec)
00:20:18.738 slat (usec): min=2, max=2104, avg=68.43, stdev=258.28
00:20:18.738 clat (usec): min=6814, max=9577, avg=8897.87, stdev=330.46
00:20:18.738 lat (usec): min=7257, max=10665, avg=8966.30, stdev=299.75
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 8717],
00:20:18.738 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 8979],
00:20:18.738 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9241], 95.00th=[ 9241],
00:20:18.738 | 99.00th=[ 9503], 99.50th=[ 9503], 99.90th=[ 9634], 99.95th=[ 9634],
00:20:18.738 | 99.99th=[ 9634]
00:20:18.738 write: IOPS=7441, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1003msec); 0 zone resets
00:20:18.738 slat (usec): min=2, max=2015, avg=65.35, stdev=244.85
00:20:18.738 clat (usec): min=2350, max=11409, avg=8481.90, stdev=493.93
00:20:18.738 lat (usec): min=3264, max=11415, avg=8547.25, stdev=476.33
00:20:18.738 clat percentiles (usec):
00:20:18.738 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 8356],
00:20:18.738 | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586],
00:20:18.738 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8848],
00:20:18.738 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11338], 99.95th=[11338],
00:20:18.738 | 99.99th=[11469]
00:20:18.738 bw ( KiB/s): min=28992, max=29644, per=22.66%, avg=29318.00, stdev=461.03, samples=2
00:20:18.738 iops : min= 7248, max= 7411, avg=7329.50, stdev=115.26, samples=2
00:20:18.738 lat (msec) : 4=0.06%, 10=99.73%, 20=0.21%
00:20:18.738 cpu : usr=2.00%, sys=5.59%, ctx=914, majf=0, minf=1
00:20:18.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:20:18.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:18.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:18.738 issued rwts: total=7168,7464,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:18.738 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:18.738
00:20:18.738 Run status group 0 (all jobs):
00:20:18.738 READ: bw=124MiB/s (130MB/s), 27.9MiB/s-33.9MiB/s (29.3MB/s-35.6MB/s), io=124MiB (130MB), run=1001-1003msec
00:20:18.738 WRITE: bw=126MiB/s (132MB/s), 28.4MiB/s-34.9MiB/s (29.8MB/s-36.6MB/s), io=127MiB (133MB), run=1001-1003msec
00:20:18.738
00:20:18.738 Disk stats (read/write):
00:20:18.738 nvme0n1: ios=7217/7550, merge=0/0, ticks=26169/25920, in_queue=52089, util=84.47%
00:20:18.738 nvme0n2: ios=7168/7353, merge=0/0, ticks=13405/12866, in_queue=26271, util=85.41%
00:20:18.738 nvme0n3: ios=5920/6144, merge=0/0, ticks=26154/25645, in_queue=51799, util=88.48%
00:20:18.738 nvme0n4: ios=6024/6144, merge=0/0, ticks=26318/25554, in_queue=51872, util=89.52%
00:20:18.738 05:22:34 -- target/fio.sh@55 -- # sync
00:20:18.738 05:22:34 -- target/fio.sh@59 -- # fio_pid=1847637
00:20:18.738 05:22:34 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:20:18.738 05:22:34 -- target/fio.sh@61 -- # sleep 3
00:20:18.738 [global]
00:20:18.738 thread=1
00:20:18.738 invalidate=1
00:20:18.738 rw=read
00:20:18.738 time_based=1
00:20:18.738 runtime=10
00:20:18.738 ioengine=libaio
00:20:18.738 direct=1
00:20:18.738 bs=4096
00:20:18.738 iodepth=1
00:20:18.738 norandommap=1
00:20:18.738 numjobs=1
00:20:18.738
00:20:18.738 [job0]
00:20:18.738 filename=/dev/nvme0n1
00:20:18.738 [job1]
00:20:18.738 filename=/dev/nvme0n2
00:20:18.738 [job2]
00:20:18.738 filename=/dev/nvme0n3
00:20:18.738 [job3]
00:20:18.738 filename=/dev/nvme0n4
00:20:18.738 Could not set queue depth (nvme0n1)
00:20:18.738 Could not set queue depth (nvme0n2)
00:20:18.738 Could not set queue depth (nvme0n3)
00:20:18.738 Could not set queue depth (nvme0n4)
00:20:18.995 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:18.995 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:18.995 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:18.995 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:18.995 fio-3.35
00:20:18.995 Starting 4 threads
00:20:21.774 05:22:37 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:20:21.774 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=101449728, buflen=4096
00:20:21.774 fio: pid=1847818, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:20:21.774 05:22:38 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:20:21.774 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=81756160, buflen=4096
00:20:21.774 fio: pid=1847814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:20:21.774 05:22:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:21.774 05:22:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:22.032 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22102016, buflen=4096
00:20:22.032 fio: pid=1847795, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:20:22.032 05:22:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:22.032 05:22:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:22.290 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35631104, buflen=4096
00:20:22.290 fio: pid=1847802, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:20:22.290 05:22:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:22.290 05:22:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:20:22.290
00:20:22.290 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1847795: Tue Nov 19 05:22:38 2024
00:20:22.290 read: IOPS=7207, BW=28.2MiB/s (29.5MB/s)(85.1MiB/3022msec)
00:20:22.290 slat (usec): min=8, max=23665, avg=12.32, stdev=228.85
00:20:22.290 clat (usec): min=49, max=215, avg=123.98, stdev=25.30
00:20:22.290 lat (usec): min=58, max=23739, avg=136.30, stdev=229.89
00:20:22.290 clat percentiles (usec):
00:20:22.290 | 1.00th=[ 61], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 113],
00:20:22.290 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 135],
00:20:22.291 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155],
00:20:22.291 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 202],
00:20:22.291 | 99.99th=[ 215]
00:20:22.291 bw ( KiB/s): min=26187, max=30456, per=24.51%, avg=27763.80, stdev=1760.74, samples=5
00:20:22.291 iops : min= 6546, max= 7614, avg=6940.80, stdev=440.35, samples=5
00:20:22.291 lat (usec) : 50=0.01%, 100=14.87%, 250=85.12%
00:20:22.291 cpu : usr=3.38%, sys=10.26%, ctx=21786, majf=0, minf=1
00:20:22.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:22.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 issued rwts: total=21781,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:22.291 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:22.291 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1847802: Tue Nov 19 05:22:38 2024
00:20:22.291 read: IOPS=7753, BW=30.3MiB/s (31.8MB/s)(98.0MiB/3235msec)
00:20:22.291 slat (usec): min=8, max=16892, avg=12.16, stdev=210.98
00:20:22.291 clat (usec): min=40, max=228, avg=115.14, stdev=32.09
00:20:22.291 lat (usec): min=58, max=16959, avg=127.31, stdev=213.00
00:20:22.291 clat percentiles (usec):
00:20:22.291 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 75],
00:20:22.291 | 30.00th=[ 113], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 131],
00:20:22.291 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153],
00:20:22.291 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 196],
00:20:22.291 | 99.99th=[ 210]
00:20:22.291 bw ( KiB/s): min=26320, max=38154, per=26.21%, avg=29684.33, stdev=4410.27, samples=6
00:20:22.291 iops : min= 6580, max= 9538, avg=7421.00, stdev=1102.38, samples=6
00:20:22.291 lat (usec) : 50=0.05%, 100=26.73%, 250=73.22%
00:20:22.291 cpu : usr=3.68%, sys=10.95%, ctx=25090, majf=0, minf=2
00:20:22.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:22.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 issued rwts: total=25084,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:22.291 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:22.291 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1847814: Tue Nov 19 05:22:38 2024
00:20:22.291 read: IOPS=7026, BW=27.4MiB/s (28.8MB/s)(78.0MiB/2841msec)
00:20:22.291 slat (usec): min=8, max=15958, avg=10.24, stdev=125.41
00:20:22.291 clat (usec): min=61, max=219, avg=129.53, stdev=18.20
00:20:22.291 lat (usec): min=70, max=16075, avg=139.77, stdev=126.63
00:20:22.291 clat percentiles (usec):
00:20:22.291 | 1.00th=[ 79], 5.00th=[ 98], 10.00th=[ 112], 20.00th=[ 117],
00:20:22.291 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 131], 60.00th=[ 137],
00:20:22.291 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155],
00:20:22.291 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 204],
00:20:22.291 | 99.99th=[ 219]
00:20:22.291 bw ( KiB/s): min=26256, max=30416, per=24.67%, avg=27937.60, stdev=1699.67, samples=5
00:20:22.291 iops : min= 6564, max= 7604, avg=6984.40, stdev=424.92, samples=5
00:20:22.291 lat (usec) : 100=5.66%, 250=94.34%
00:20:22.291 cpu : usr=2.78%, sys=10.49%, ctx=19965, majf=0, minf=2
00:20:22.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:22.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 issued rwts: total=19961,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:22.291 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:22.291 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1847818: Tue Nov 19 05:22:38 2024
00:20:22.291 read: IOPS=9350, BW=36.5MiB/s (38.3MB/s)(96.8MiB/2649msec)
00:20:22.291 slat (nsec): min=8293, max=43277, avg=8951.07, stdev=1026.90
00:20:22.291 clat (usec): min=59, max=170, avg=95.45, stdev=16.62
00:20:22.291 lat (usec): min=77, max=179, avg=104.40, stdev=16.73
00:20:22.291 clat percentiles (usec):
00:20:22.291 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82],
00:20:22.291 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 92],
00:20:22.291 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 124],
00:20:22.291 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 145], 99.95th=[ 151],
00:20:22.291 | 99.99th=[ 159]
00:20:22.291 bw ( KiB/s): min=32104, max=42592, per=33.71%, avg=38171.20, stdev=4043.21, samples=5
00:20:22.291 iops : min= 8026, max=10648, avg=9542.80, stdev=1010.80, samples=5
00:20:22.291 lat (usec) : 100=68.53%, 250=31.47%
00:20:22.291 cpu : usr=4.80%, sys=12.76%, ctx=24769, majf=0, minf=2
00:20:22.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:22.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:22.291 issued rwts: total=24769,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:22.291 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:22.291
00:20:22.291 Run status group 0 (all jobs):
00:20:22.291 READ: bw=111MiB/s (116MB/s), 27.4MiB/s-36.5MiB/s (28.8MB/s-38.3MB/s), io=358MiB (375MB), run=2649-3235msec
00:20:22.291
00:20:22.291 Disk stats (read/write):
00:20:22.291 nvme0n1: ios=20070/0, merge=0/0, ticks=2369/0, in_queue=2369, util=93.49%
00:20:22.291 nvme0n2: ios=23035/0, merge=0/0, ticks=2552/0, in_queue=2552, util=93.31%
00:20:22.291 nvme0n3: ios=19961/0, merge=0/0, ticks=2383/0, in_queue=2383, util=95.21%
00:20:22.291 nvme0n4: ios=24485/0, merge=0/0, ticks=2119/0, in_queue=2119, util=96.42%
00:20:22.548 05:22:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:22.548 05:22:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:20:22.808 05:22:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:22.808 05:22:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:20:22.808 05:22:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:22.808 05:22:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:20:23.070 05:22:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:23.070 05:22:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:20:23.327 05:22:39 -- target/fio.sh@69 -- # fio_status=0
00:20:23.327 05:22:39 -- target/fio.sh@70 -- # wait 1847637
00:20:23.327 05:22:39 -- target/fio.sh@70 -- # fio_status=4
00:20:23.327 05:22:39 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:24.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:24.257 05:22:40 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:20:24.257 05:22:40 -- common/autotest_common.sh@1208 -- # local i=0
00:20:24.257 05:22:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:20:24.257 05:22:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:24.257 05:22:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:20:24.257 05:22:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:24.257 05:22:40 -- common/autotest_common.sh@1220 -- # return 0
00:20:24.257 05:22:40 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:20:24.257 05:22:40 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:20:24.257 nvmf hotplug test: fio failed as expected
00:20:24.257 05:22:40 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:24.515 05:22:40 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:20:24.515 05:22:40 --
00:20:25.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:25.032 05:22:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:25.032 05:22:41 -- common/autotest_common.sh@1690 -- # lcov --version
00:20:25.033 05:22:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:25.033 05:22:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:25.033 05:22:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:25.033 05:22:41 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:25.033 05:22:41 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:25.033 05:22:41 -- scripts/common.sh@335 -- # IFS=.-:
00:20:25.033 05:22:41 -- scripts/common.sh@335 -- # read -ra ver1
00:20:25.033 05:22:41 -- scripts/common.sh@336 -- # IFS=.-:
00:20:25.033 05:22:41 -- scripts/common.sh@336 -- # read -ra ver2
00:20:25.033 05:22:41 -- scripts/common.sh@337 -- # local 'op=<'
00:20:25.033 05:22:41 -- scripts/common.sh@339 -- # ver1_l=2
00:20:25.033 05:22:41 -- scripts/common.sh@340 -- # ver2_l=1
00:20:25.033 05:22:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:25.033 05:22:41 -- scripts/common.sh@343 -- # case "$op" in
00:20:25.033 05:22:41 -- scripts/common.sh@344 -- # : 1
00:20:25.033 05:22:41 -- scripts/common.sh@363 -- # (( v = 0 ))
00:20:25.033 05:22:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:25.033 05:22:41 -- scripts/common.sh@364 -- # decimal 1
00:20:25.033 05:22:41 -- scripts/common.sh@352 -- # local d=1
00:20:25.033 05:22:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:25.033 05:22:41 -- scripts/common.sh@354 -- # echo 1
00:20:25.033 05:22:41 -- scripts/common.sh@364 -- # ver1[v]=1
00:20:25.033 05:22:41 -- scripts/common.sh@365 -- # decimal 2
00:20:25.033 05:22:41 -- scripts/common.sh@352 -- # local d=2
00:20:25.033 05:22:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:25.033 05:22:41 -- scripts/common.sh@354 -- # echo 2
00:20:25.033 05:22:41 -- scripts/common.sh@365 -- # ver2[v]=2
00:20:25.033 05:22:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:25.033 05:22:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:25.033 05:22:41 -- scripts/common.sh@367 -- # return 0
00:20:25.033 05:22:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:25.033 05:22:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:25.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.033 --rc genhtml_branch_coverage=1
00:20:25.033 --rc genhtml_function_coverage=1
00:20:25.033 --rc genhtml_legend=1
00:20:25.033 --rc geninfo_all_blocks=1
00:20:25.033 --rc geninfo_unexecuted_blocks=1
00:20:25.033
00:20:25.033 '
00:20:25.033 05:22:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:25.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.033 --rc genhtml_branch_coverage=1
00:20:25.033 --rc genhtml_function_coverage=1
00:20:25.033 --rc genhtml_legend=1
00:20:25.033 --rc geninfo_all_blocks=1
00:20:25.033 --rc geninfo_unexecuted_blocks=1
00:20:25.033
00:20:25.033 '
00:20:25.033 05:22:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:20:25.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.033 --rc genhtml_branch_coverage=1
00:20:25.033 --rc genhtml_function_coverage=1
00:20:25.033 --rc genhtml_legend=1
00:20:25.033 --rc geninfo_all_blocks=1
00:20:25.033 --rc geninfo_unexecuted_blocks=1
00:20:25.033
00:20:25.033 '
00:20:25.033 05:22:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:25.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.033 --rc genhtml_branch_coverage=1 00:20:25.033 --rc genhtml_function_coverage=1 00:20:25.033 --rc genhtml_legend=1 00:20:25.033 --rc geninfo_all_blocks=1 00:20:25.033 --rc geninfo_unexecuted_blocks=1 00:20:25.033 00:20:25.033 ' 00:20:25.033 05:22:41 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.033 05:22:41 -- nvmf/common.sh@7 -- # uname -s 00:20:25.033 05:22:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.033 05:22:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.033 05:22:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.033 05:22:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.033 05:22:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.033 05:22:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.033 05:22:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.033 05:22:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.033 05:22:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.033 05:22:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.033 05:22:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:25.033 05:22:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:25.033 05:22:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.033 05:22:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.033 05:22:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.033 05:22:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:25.033 05:22:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.033 05:22:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.033 05:22:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.033 05:22:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.033 05:22:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.033 05:22:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.033 05:22:41 -- paths/export.sh@5 -- # export PATH 00:20:25.033 05:22:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.033 05:22:41 -- nvmf/common.sh@46 -- # : 0 00:20:25.033 05:22:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:25.033 05:22:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:25.033 05:22:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:25.033 05:22:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.033 05:22:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.033 05:22:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:25.033 05:22:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:25.033 05:22:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:25.033 05:22:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:25.033 05:22:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:25.033 05:22:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:25.033 05:22:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:25.033 05:22:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.033 05:22:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:25.033 05:22:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:25.033 05:22:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:25.033 05:22:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.033 05:22:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.033 05:22:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.033 05:22:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:25.033 05:22:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:25.033 05:22:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:25.033 05:22:41 -- common/autotest_common.sh@10 -- # set +x 00:20:31.587 05:22:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:31.587 05:22:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:31.587 05:22:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:31.587 05:22:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:31.587 05:22:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:31.587 05:22:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:31.587 05:22:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:31.587 05:22:48 -- nvmf/common.sh@294 -- # net_devs=() 00:20:31.587 05:22:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:31.587 05:22:48 -- nvmf/common.sh@295 
-- # e810=() 00:20:31.587 05:22:48 -- nvmf/common.sh@295 -- # local -ga e810 00:20:31.587 05:22:48 -- nvmf/common.sh@296 -- # x722=() 00:20:31.587 05:22:48 -- nvmf/common.sh@296 -- # local -ga x722 00:20:31.587 05:22:48 -- nvmf/common.sh@297 -- # mlx=() 00:20:31.587 05:22:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:31.587 05:22:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.587 05:22:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:31.587 05:22:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:31.587 05:22:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:31.587 05:22:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:31.587 05:22:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:31.587 05:22:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:31.587 05:22:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:31.587 05:22:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:31.587 05:22:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:31.587 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:31.588 05:22:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:31.588 05:22:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:31.588 05:22:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:31.588 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:31.588 05:22:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:31.588 05:22:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:31.588 05:22:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:31.588 05:22:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.588 05:22:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
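The scan above classifies PCI functions into driver families purely by vendor:device ID before settling on the mlx5 set for RDMA. A minimal standalone sketch of that classification (the trace reads a prebuilt pci_bus_cache; the lspci parsing below is an assumption, and the ConnectX wildcard stands in for the explicit ID list the trace enumerates):

    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx
    while read -r addr vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # E810 family
            "$intel:0x37d2")                   x722+=("$addr") ;;  # X722 iWARP
            "$mellanox:"*)                     mlx+=("$addr")  ;;  # ConnectX family
        esac
    done < <(lspci -Dnmm | awk -F'"' '{print $1, "0x"$4, "0x"$6}')
    echo "mlx devices: ${mlx[*]}"    # here: 0000:d9:00.0 0000:d9:00.1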
00:20:31.588 05:22:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.588 05:22:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:31.588 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:31.588 05:22:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.588 05:22:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:31.588 05:22:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.588 05:22:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:31.588 05:22:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.588 05:22:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:31.588 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:31.588 05:22:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.588 05:22:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:31.588 05:22:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:31.588 05:22:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:31.588 05:22:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:31.588 05:22:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:31.588 05:22:48 -- nvmf/common.sh@57 -- # uname 00:20:31.588 05:22:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:31.588 05:22:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:31.588 05:22:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:31.588 05:22:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:31.588 05:22:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:31.588 05:22:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:31.588 05:22:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:31.588 05:22:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:31.588 05:22:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:31.588 05:22:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:31.846 05:22:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:31.846 05:22:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:31.846 05:22:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:31.846 05:22:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:31.846 05:22:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:31.846 05:22:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:31.846 05:22:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@104 -- # continue 2 00:20:31.846 05:22:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@104 -- # continue 2 00:20:31.846 05:22:48 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:31.846 05:22:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:31.846 05:22:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:31.846 05:22:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:31.846 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:31.846 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:31.846 altname enp217s0f0np0 00:20:31.846 altname ens818f0np0 00:20:31.846 inet 192.168.100.8/24 scope global mlx_0_0 00:20:31.846 valid_lft forever preferred_lft forever 00:20:31.846 05:22:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:31.846 05:22:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:31.846 05:22:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:31.846 05:22:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:31.846 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:31.846 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:31.846 altname enp217s0f1np1 00:20:31.846 altname ens818f1np1 00:20:31.846 inet 192.168.100.9/24 scope global mlx_0_1 00:20:31.846 valid_lft forever preferred_lft forever 00:20:31.846 05:22:48 -- nvmf/common.sh@410 -- # return 0 00:20:31.846 05:22:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:31.846 05:22:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:31.846 05:22:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:31.846 05:22:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:31.846 05:22:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:31.846 05:22:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:31.846 05:22:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:31.846 05:22:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:31.846 05:22:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:31.846 05:22:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@104 -- # continue 2 00:20:31.846 05:22:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.846 05:22:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:31.846 05:22:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:31.846 05:22:48 -- 
nvmf/common.sh@104 -- # continue 2 00:20:31.846 05:22:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:31.846 05:22:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:31.846 05:22:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:31.846 05:22:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:31.846 05:22:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:31.846 05:22:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:31.846 192.168.100.9' 00:20:31.846 05:22:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:31.846 192.168.100.9' 00:20:31.846 05:22:48 -- nvmf/common.sh@445 -- # head -n 1 00:20:31.846 05:22:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:31.846 05:22:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:31.846 192.168.100.9' 00:20:31.846 05:22:48 -- nvmf/common.sh@446 -- # tail -n +2 00:20:31.846 05:22:48 -- nvmf/common.sh@446 -- # head -n 1 00:20:31.846 05:22:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:31.846 05:22:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:31.846 05:22:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:31.846 05:22:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:31.846 05:22:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:31.846 05:22:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:31.846 05:22:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:31.846 05:22:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:31.846 05:22:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.846 05:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:31.846 05:22:48 -- nvmf/common.sh@469 -- # nvmfpid=1852203 00:20:31.846 05:22:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:31.846 05:22:48 -- nvmf/common.sh@470 -- # waitforlisten 1852203 00:20:31.846 05:22:48 -- common/autotest_common.sh@829 -- # '[' -z 1852203 ']' 00:20:31.846 05:22:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.846 05:22:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.846 05:22:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.846 05:22:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.846 05:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:31.846 [2024-11-19 05:22:48.386549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:31.846 [2024-11-19 05:22:48.386599] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.104 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.104 [2024-11-19 05:22:48.456394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.104 [2024-11-19 05:22:48.493475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.104 [2024-11-19 05:22:48.493605] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.104 [2024-11-19 05:22:48.493616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.104 [2024-11-19 05:22:48.493624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.104 [2024-11-19 05:22:48.493759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:32.104 [2024-11-19 05:22:48.493867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:32.104 [2024-11-19 05:22:48.493974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.104 [2024-11-19 05:22:48.493976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:32.668 05:22:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.668 05:22:49 -- common/autotest_common.sh@862 -- # return 0 00:20:32.668 05:22:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:32.668 05:22:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.668 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 05:22:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.925 05:22:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:32.925 05:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.925 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 [2024-11-19 05:22:49.276402] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1813ae0/0x1817fd0) succeed. 00:20:32.925 [2024-11-19 05:22:49.285707] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18150d0/0x1859670) succeed. 
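The rpc_cmd calls around this point talk JSON-RPC to the freshly started nvmf_tgt on /var/tmp/spdk.sock; the same bdevio target setup can be replayed by hand with scripts/rpc.py (a sketch -- every flag and value is lifted from the trace above and below):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420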
00:20:32.925 05:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.925 05:22:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.925 05:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.925 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 Malloc0 00:20:32.925 05:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.925 05:22:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.925 05:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.925 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 05:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.925 05:22:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.925 05:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.925 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 05:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.925 05:22:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:32.925 05:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.925 05:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:32.925 [2024-11-19 05:22:49.456936] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:32.925 05:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.925 05:22:49 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:32.925 05:22:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:32.925 05:22:49 -- nvmf/common.sh@520 -- # config=() 00:20:32.925 05:22:49 -- nvmf/common.sh@520 -- # local subsystem config 00:20:32.925 05:22:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:32.925 05:22:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:32.925 { 00:20:32.925 "params": { 00:20:32.925 "name": "Nvme$subsystem", 00:20:32.925 "trtype": "$TEST_TRANSPORT", 00:20:32.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.925 "adrfam": "ipv4", 00:20:32.925 "trsvcid": "$NVMF_PORT", 00:20:32.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.925 "hdgst": ${hdgst:-false}, 00:20:32.925 "ddgst": ${ddgst:-false} 00:20:32.925 }, 00:20:32.925 "method": "bdev_nvme_attach_controller" 00:20:32.925 } 00:20:32.925 EOF 00:20:32.925 )") 00:20:32.925 05:22:49 -- nvmf/common.sh@542 -- # cat 00:20:32.925 05:22:49 -- nvmf/common.sh@544 -- # jq . 00:20:32.925 05:22:49 -- nvmf/common.sh@545 -- # IFS=, 00:20:32.925 05:22:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:32.925 "params": { 00:20:32.925 "name": "Nvme1", 00:20:32.925 "trtype": "rdma", 00:20:32.925 "traddr": "192.168.100.8", 00:20:32.925 "adrfam": "ipv4", 00:20:32.925 "trsvcid": "4420", 00:20:32.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.925 "hdgst": false, 00:20:32.925 "ddgst": false 00:20:32.925 }, 00:20:32.925 "method": "bdev_nvme_attach_controller" 00:20:32.925 }' 00:20:33.182 [2024-11-19 05:22:49.504369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:33.182 [2024-11-19 05:22:49.504420] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852373 ] 00:20:33.182 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.182 [2024-11-19 05:22:49.575153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:33.182 [2024-11-19 05:22:49.613586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.182 [2024-11-19 05:22:49.613679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.182 [2024-11-19 05:22:49.613681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.440 [2024-11-19 05:22:49.778353] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:33.440 [2024-11-19 05:22:49.778385] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:33.440 I/O targets: 00:20:33.440 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:33.440 00:20:33.440 00:20:33.440 CUnit - A unit testing framework for C - Version 2.1-3 00:20:33.440 http://cunit.sourceforge.net/ 00:20:33.440 00:20:33.440 00:20:33.440 Suite: bdevio tests on: Nvme1n1 00:20:33.440 Test: blockdev write read block ...passed 00:20:33.440 Test: blockdev write zeroes read block ...passed 00:20:33.440 Test: blockdev write zeroes read no split ...passed 00:20:33.440 Test: blockdev write zeroes read split ...passed 00:20:33.440 Test: blockdev write zeroes read split partial ...passed 00:20:33.440 Test: blockdev reset ...[2024-11-19 05:22:49.808172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:33.440 [2024-11-19 05:22:49.830811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:33.440 [2024-11-19 05:22:49.857549] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:33.440 passed 00:20:33.440 Test: blockdev write read 8 blocks ...passed 00:20:33.440 Test: blockdev write read size > 128k ...passed 00:20:33.440 Test: blockdev write read invalid size ...passed 00:20:33.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.440 Test: blockdev write read max offset ...passed 00:20:33.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.440 Test: blockdev writev readv 8 blocks ...passed 00:20:33.440 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.440 Test: blockdev writev readv block ...passed 00:20:33.440 Test: blockdev writev readv size > 128k ...passed 00:20:33.440 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.440 Test: blockdev comparev and writev ...[2024-11-19 05:22:49.860412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.860452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.860647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.860669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.860833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.860853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.860862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.861009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.861022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.861033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.440 [2024-11-19 05:22:49.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.440 passed 00:20:33.440 Test: blockdev nvme passthru rw ...passed 00:20:33.440 Test: blockdev nvme passthru vendor specific ...[2024-11-19 05:22:49.861286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.440 [2024-11-19 05:22:49.861298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.861338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.440 [2024-11-19 05:22:49.861348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.861387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.440 [2024-11-19 05:22:49.861397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:33.440 [2024-11-19 05:22:49.861435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.440 [2024-11-19 05:22:49.861446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:33.440 passed 00:20:33.440 Test: blockdev nvme admin passthru ...passed 00:20:33.440 Test: blockdev copy ...passed 00:20:33.440 00:20:33.440 Run Summary: Type Total Ran Passed Failed Inactive 00:20:33.440 suites 1 1 n/a 0 0 00:20:33.440 tests 23 23 23 0 0 00:20:33.440 asserts 152 152 152 0 n/a 00:20:33.440 00:20:33.440 Elapsed time = 0.169 seconds 00:20:33.698 05:22:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.698 05:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.698 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:33.698 05:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.698 05:22:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:33.698 05:22:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:33.698 05:22:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:33.698 05:22:50 -- nvmf/common.sh@116 -- # sync 00:20:33.698 05:22:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:33.698 05:22:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:33.698 05:22:50 -- nvmf/common.sh@119 -- # set +e 00:20:33.698 05:22:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:33.698 05:22:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:33.698 rmmod nvme_rdma 00:20:33.698 rmmod nvme_fabrics 00:20:33.698 05:22:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:33.698 05:22:50 -- nvmf/common.sh@123 -- # set -e 00:20:33.698 05:22:50 -- nvmf/common.sh@124 -- # return 0 00:20:33.698 05:22:50 -- nvmf/common.sh@477 -- # '[' -n 1852203 ']' 00:20:33.698 05:22:50 -- nvmf/common.sh@478 -- # killprocess 1852203 00:20:33.698 05:22:50 -- common/autotest_common.sh@936 -- # '[' -z 1852203 ']' 00:20:33.698 05:22:50 -- common/autotest_common.sh@940 -- # kill -0 1852203 00:20:33.698 05:22:50 -- common/autotest_common.sh@941 -- # uname 00:20:33.698 05:22:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.698 05:22:50 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1852203 00:20:33.698 05:22:50 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:33.698 05:22:50 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:33.698 05:22:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1852203' 00:20:33.698 killing process with pid 1852203 00:20:33.698 05:22:50 -- common/autotest_common.sh@955 -- # kill 1852203 00:20:33.698 05:22:50 -- common/autotest_common.sh@960 -- # wait 1852203 00:20:33.956 05:22:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:33.956 05:22:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:33.956 00:20:33.956 real 0m9.094s 00:20:33.956 user 0m10.606s 00:20:33.956 sys 0m5.879s 00:20:33.956 05:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:33.956 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:33.956 ************************************ 00:20:33.956 END TEST nvmf_bdevio 00:20:33.956 ************************************ 00:20:33.956 05:22:50 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:33.956 05:22:50 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:33.956 05:22:50 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:33.956 05:22:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:33.956 05:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.956 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:33.956 ************************************ 00:20:33.956 START TEST nvmf_fuzz 00:20:33.956 ************************************ 00:20:33.956 05:22:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:34.214 * Looking for test storage... 00:20:34.214 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:34.214 05:22:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:34.214 05:22:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:34.214 05:22:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:34.214 05:22:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:34.214 05:22:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:34.214 05:22:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:34.214 05:22:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:34.214 05:22:50 -- scripts/common.sh@335 -- # IFS=.-: 00:20:34.214 05:22:50 -- scripts/common.sh@335 -- # read -ra ver1 00:20:34.214 05:22:50 -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.214 05:22:50 -- scripts/common.sh@336 -- # read -ra ver2 00:20:34.214 05:22:50 -- scripts/common.sh@337 -- # local 'op=<' 00:20:34.214 05:22:50 -- scripts/common.sh@339 -- # ver1_l=2 00:20:34.214 05:22:50 -- scripts/common.sh@340 -- # ver2_l=1 00:20:34.214 05:22:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:34.214 05:22:50 -- scripts/common.sh@343 -- # case "$op" in 00:20:34.214 05:22:50 -- scripts/common.sh@344 -- # : 1 00:20:34.214 05:22:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:34.214 05:22:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.214 05:22:50 -- scripts/common.sh@364 -- # decimal 1 00:20:34.214 05:22:50 -- scripts/common.sh@352 -- # local d=1 00:20:34.214 05:22:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.214 05:22:50 -- scripts/common.sh@354 -- # echo 1 00:20:34.214 05:22:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:34.214 05:22:50 -- scripts/common.sh@365 -- # decimal 2 00:20:34.214 05:22:50 -- scripts/common.sh@352 -- # local d=2 00:20:34.214 05:22:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.214 05:22:50 -- scripts/common.sh@354 -- # echo 2 00:20:34.214 05:22:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:34.214 05:22:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:34.214 05:22:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:34.214 05:22:50 -- scripts/common.sh@367 -- # return 0 00:20:34.214 05:22:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.215 05:22:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:34.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.215 --rc genhtml_branch_coverage=1 00:20:34.215 --rc genhtml_function_coverage=1 00:20:34.215 --rc genhtml_legend=1 00:20:34.215 --rc geninfo_all_blocks=1 00:20:34.215 --rc geninfo_unexecuted_blocks=1 00:20:34.215 00:20:34.215 ' 00:20:34.215 05:22:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:34.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.215 --rc genhtml_branch_coverage=1 00:20:34.215 --rc genhtml_function_coverage=1 00:20:34.215 --rc genhtml_legend=1 00:20:34.215 --rc geninfo_all_blocks=1 00:20:34.215 --rc geninfo_unexecuted_blocks=1 00:20:34.215 00:20:34.215 ' 00:20:34.215 05:22:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:34.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.215 --rc genhtml_branch_coverage=1 00:20:34.215 --rc genhtml_function_coverage=1 00:20:34.215 --rc genhtml_legend=1 00:20:34.215 --rc geninfo_all_blocks=1 00:20:34.215 --rc geninfo_unexecuted_blocks=1 00:20:34.215 00:20:34.215 ' 00:20:34.215 05:22:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:34.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.215 --rc genhtml_branch_coverage=1 00:20:34.215 --rc genhtml_function_coverage=1 00:20:34.215 --rc genhtml_legend=1 00:20:34.215 --rc geninfo_all_blocks=1 00:20:34.215 --rc geninfo_unexecuted_blocks=1 00:20:34.215 00:20:34.215 ' 00:20:34.215 05:22:50 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.215 05:22:50 -- nvmf/common.sh@7 -- # uname -s 00:20:34.215 05:22:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.215 05:22:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.215 05:22:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.215 05:22:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.215 05:22:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.215 05:22:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.215 05:22:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.215 05:22:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.215 05:22:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.215 05:22:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.215 05:22:50 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.215 05:22:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:34.215 05:22:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.215 05:22:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.215 05:22:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.215 05:22:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:34.215 05:22:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.215 05:22:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.215 05:22:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.215 05:22:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 05:22:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 05:22:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 05:22:50 -- paths/export.sh@5 -- # export PATH 00:20:34.215 05:22:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 05:22:50 -- nvmf/common.sh@46 -- # : 0 00:20:34.215 05:22:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:34.215 05:22:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:34.215 05:22:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:34.215 05:22:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.215 05:22:50 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.215 05:22:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:34.215 05:22:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:34.215 05:22:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:34.215 05:22:50 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:34.215 05:22:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:34.215 05:22:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.215 05:22:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:34.215 05:22:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:34.215 05:22:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:34.215 05:22:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.215 05:22:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.215 05:22:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.215 05:22:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:34.215 05:22:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:34.215 05:22:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:34.215 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:40.774 05:22:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:40.774 05:22:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:40.774 05:22:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:40.774 05:22:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:40.774 05:22:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:40.774 05:22:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:40.774 05:22:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:40.774 05:22:56 -- nvmf/common.sh@294 -- # net_devs=() 00:20:40.774 05:22:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:40.774 05:22:56 -- nvmf/common.sh@295 -- # e810=() 00:20:40.774 05:22:56 -- nvmf/common.sh@295 -- # local -ga e810 00:20:40.774 05:22:56 -- nvmf/common.sh@296 -- # x722=() 00:20:40.774 05:22:56 -- nvmf/common.sh@296 -- # local -ga x722 00:20:40.774 05:22:56 -- nvmf/common.sh@297 -- # mlx=() 00:20:40.774 05:22:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:40.774 05:22:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.774 05:22:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
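The per-interface IP harvesting traced at 00:20:31 above, and repeated for the fuzz run below, reduces to one pipeline per RDMA netdev; as a standalone helper (interface names taken from the trace):

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, with the /24 prefix length stripped
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8
    get_ip_address mlx_0_1    # -> 192.168.100.9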
00:20:40.774 05:22:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:40.774 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:40.774 05:22:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.774 05:22:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:40.774 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:40.774 05:22:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.774 05:22:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.774 05:22:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.774 05:22:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:40.774 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:40.774 05:22:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.774 05:22:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.774 05:22:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:40.774 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:40.774 05:22:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.774 05:22:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:40.774 05:22:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:40.774 05:22:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:40.774 05:22:56 -- nvmf/common.sh@57 -- # uname 00:20:40.774 05:22:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:40.774 05:22:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:40.774 05:22:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:40.774 05:22:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:40.774 
05:22:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:40.774 05:22:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:40.774 05:22:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:40.774 05:22:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:40.774 05:22:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:40.774 05:22:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.774 05:22:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:40.774 05:22:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.774 05:22:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.774 05:22:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.774 05:22:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.774 05:22:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.774 05:22:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.774 05:22:56 -- nvmf/common.sh@104 -- # continue 2 00:20:40.774 05:22:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.774 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.774 05:22:56 -- nvmf/common.sh@104 -- # continue 2 00:20:40.774 05:22:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:40.774 05:22:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:40.774 05:22:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.774 05:22:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.774 05:22:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.774 05:22:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.774 05:22:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:40.774 05:22:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:40.774 05:22:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:40.774 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.774 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:40.774 altname enp217s0f0np0 00:20:40.774 altname ens818f0np0 00:20:40.774 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.774 valid_lft forever preferred_lft forever 00:20:40.774 05:22:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:40.774 05:22:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.775 05:22:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:40.775 05:22:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:40.775 05:22:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:40.775 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.775 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:40.775 altname enp217s0f1np1 
00:20:40.775 altname ens818f1np1 00:20:40.775 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.775 valid_lft forever preferred_lft forever 00:20:40.775 05:22:56 -- nvmf/common.sh@410 -- # return 0 00:20:40.775 05:22:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.775 05:22:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.775 05:22:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:40.775 05:22:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:40.775 05:22:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:40.775 05:22:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.775 05:22:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.775 05:22:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.775 05:22:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.775 05:22:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.775 05:22:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.775 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.775 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.775 05:22:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.775 05:22:56 -- nvmf/common.sh@104 -- # continue 2 00:20:40.775 05:22:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.775 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.775 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.775 05:22:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.775 05:22:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.775 05:22:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@104 -- # continue 2 00:20:40.775 05:22:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.775 05:22:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:40.775 05:22:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.775 05:22:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.775 05:22:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.775 05:22:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.775 05:22:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.775 192.168.100.9' 00:20:40.775 05:22:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:40.775 192.168.100.9' 00:20:40.775 05:22:56 -- nvmf/common.sh@445 -- # head -n 1 00:20:40.775 05:22:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.775 05:22:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:40.775 192.168.100.9' 00:20:40.775 05:22:56 -- nvmf/common.sh@446 -- # tail -n +2 00:20:40.775 05:22:56 -- nvmf/common.sh@446 -- # head -n 1 00:20:40.775 05:22:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.775 05:22:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:40.775 05:22:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:40.775 05:22:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:40.775 05:22:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:40.775 05:22:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:40.775 05:22:56 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:40.775 05:22:56 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1855814 00:20:40.775 05:22:56 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:40.775 05:22:56 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1855814 00:20:40.775 05:22:56 -- common/autotest_common.sh@829 -- # '[' -z 1855814 ']' 00:20:40.775 05:22:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.775 05:22:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.775 05:22:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.775 05:22:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.775 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:20:41.342 05:22:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.342 05:22:57 -- common/autotest_common.sh@862 -- # return 0 00:20:41.342 05:22:57 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:41.342 05:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.342 05:22:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 05:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:41.601 05:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.601 05:22:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 Malloc0 00:20:41.601 05:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.601 05:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.601 05:22:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 05:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.601 05:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.601 05:22:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 05:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:41.601 05:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.601 05:22:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 05:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:41.601 05:22:57 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a
00:21:13.661 Fuzzing completed. Shutting down the fuzz application
00:21:13.661 
00:21:13.661 Dumping successful admin opcodes:
00:21:13.661 8, 9, 10, 24,
00:21:13.661 Dumping successful io opcodes:
00:21:13.661 0, 9,
00:21:13.661 NS: 0x200003af1f00 I/O qp, Total commands completed: 1093373, total successful commands: 6420, random_seed: 181014400
00:21:13.661 NS: 0x200003af1f00 admin qp, Total commands completed: 138080, total successful commands: 1118, random_seed: 3685145408
00:21:13.661 05:23:28 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:21:13.661 Fuzzing completed. Shutting down the fuzz application
00:21:13.661 
00:21:13.661 Dumping successful admin opcodes:
00:21:13.661 24,
00:21:13.661 Dumping successful io opcodes:
00:21:13.661 
00:21:13.661 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4216122566
00:21:13.661 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4216202586
00:21:13.661 05:23:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:13.661 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:13.661 05:23:29 -- common/autotest_common.sh@10 -- # set +x
00:21:13.661 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:13.661 05:23:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:21:13.661 05:23:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:21:13.661 05:23:29 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:13.661 05:23:29 -- nvmf/common.sh@116 -- # sync
00:21:13.661 05:23:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:21:13.661 05:23:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:21:13.661 05:23:29 -- nvmf/common.sh@119 -- # set +e
00:21:13.661 05:23:29 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:13.661 05:23:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:21:13.661 rmmod nvme_rdma
00:21:13.661 rmmod nvme_fabrics
00:21:13.661 05:23:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:13.661 05:23:29 -- nvmf/common.sh@123 -- # set -e
00:21:13.661 05:23:29 -- nvmf/common.sh@124 -- # return 0
00:21:13.661 05:23:29 -- nvmf/common.sh@477 -- # '[' -n 1855814 ']'
00:21:13.661 05:23:29 -- nvmf/common.sh@478 -- # killprocess 1855814
00:21:13.661 05:23:29 -- common/autotest_common.sh@936 -- # '[' -z 1855814 ']'
00:21:13.661 05:23:29 -- common/autotest_common.sh@940 -- # kill -0 1855814
00:21:13.661 05:23:29 -- common/autotest_common.sh@941 -- # uname
00:21:13.661 05:23:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:13.661 05:23:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1855814
00:21:13.661 05:23:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:13.661 05:23:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:13.661 05:23:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1855814'
00:21:13.661 killing process with pid 1855814
00:21:13.661 05:23:29 -- common/autotest_common.sh@955 -- # kill 1855814
00:21:13.661 05:23:29 -- common/autotest_common.sh@960 -- # wait 1855814
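The two nvme_fuzz passes recorded above reduce to the shape below: a timed run of randomized commands against the RDMA listener, followed by a replay of the canned command set in example.json (note the much smaller command counts in that run's summary). This is a minimal sketch reconstructed from the command lines in the trace; the SPDK variable is shorthand introduced here for readability, and it assumes the target started at fabrics_fuzz.sh@13 is already up with cnode1 exported.

# Assumed shorthand for the workspace checkout used throughout this log.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'

# Pass 1: 30 seconds of randomized commands (-t 30) with a fixed seed
# (-S 123456) so a failing sequence can be replayed later; -N and -a are
# passed exactly as fabrics_fuzz.sh@30 does above.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -t 30 -S 123456 -F "$TRID" -N -a

# Pass 2: replay the canned command set from example.json via -j against
# the same transport ID.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -F "$TRID" -j $SPDK/test/app/fuzz/nvme_fuzz/example.json -a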
00:21:13.661 05:23:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:13.661 05:23:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:13.661 05:23:29 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:13.661 00:21:13.661 real 0m39.561s 00:21:13.661 user 0m50.818s 00:21:13.661 sys 0m19.986s 00:21:13.661 05:23:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:13.661 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:13.661 ************************************ 00:21:13.661 END TEST nvmf_fuzz 00:21:13.661 ************************************ 00:21:13.661 05:23:30 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:13.661 05:23:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:13.661 05:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.661 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:13.661 ************************************ 00:21:13.661 START TEST nvmf_multiconnection 00:21:13.661 ************************************ 00:21:13.661 05:23:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:13.661 * Looking for test storage... 00:21:13.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:13.661 05:23:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:13.661 05:23:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:13.661 05:23:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:13.920 05:23:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:13.920 05:23:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:13.920 05:23:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:13.920 05:23:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:13.920 05:23:30 -- scripts/common.sh@335 -- # IFS=.-: 00:21:13.920 05:23:30 -- scripts/common.sh@335 -- # read -ra ver1 00:21:13.920 05:23:30 -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.920 05:23:30 -- scripts/common.sh@336 -- # read -ra ver2 00:21:13.920 05:23:30 -- scripts/common.sh@337 -- # local 'op=<' 00:21:13.920 05:23:30 -- scripts/common.sh@339 -- # ver1_l=2 00:21:13.920 05:23:30 -- scripts/common.sh@340 -- # ver2_l=1 00:21:13.920 05:23:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:13.920 05:23:30 -- scripts/common.sh@343 -- # case "$op" in 00:21:13.920 05:23:30 -- scripts/common.sh@344 -- # : 1 00:21:13.920 05:23:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:13.920 05:23:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.920 05:23:30 -- scripts/common.sh@364 -- # decimal 1 00:21:13.920 05:23:30 -- scripts/common.sh@352 -- # local d=1 00:21:13.920 05:23:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.920 05:23:30 -- scripts/common.sh@354 -- # echo 1 00:21:13.920 05:23:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:13.920 05:23:30 -- scripts/common.sh@365 -- # decimal 2 00:21:13.920 05:23:30 -- scripts/common.sh@352 -- # local d=2 00:21:13.920 05:23:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.920 05:23:30 -- scripts/common.sh@354 -- # echo 2 00:21:13.920 05:23:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:13.920 05:23:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:13.920 05:23:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:13.920 05:23:30 -- scripts/common.sh@367 -- # return 0 00:21:13.920 05:23:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.920 05:23:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.920 --rc genhtml_branch_coverage=1 00:21:13.920 --rc genhtml_function_coverage=1 00:21:13.920 --rc genhtml_legend=1 00:21:13.920 --rc geninfo_all_blocks=1 00:21:13.920 --rc geninfo_unexecuted_blocks=1 00:21:13.920 00:21:13.920 ' 00:21:13.920 05:23:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.920 --rc genhtml_branch_coverage=1 00:21:13.920 --rc genhtml_function_coverage=1 00:21:13.920 --rc genhtml_legend=1 00:21:13.920 --rc geninfo_all_blocks=1 00:21:13.920 --rc geninfo_unexecuted_blocks=1 00:21:13.920 00:21:13.920 ' 00:21:13.920 05:23:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.920 --rc genhtml_branch_coverage=1 00:21:13.920 --rc genhtml_function_coverage=1 00:21:13.920 --rc genhtml_legend=1 00:21:13.920 --rc geninfo_all_blocks=1 00:21:13.920 --rc geninfo_unexecuted_blocks=1 00:21:13.920 00:21:13.920 ' 00:21:13.921 05:23:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.921 --rc genhtml_branch_coverage=1 00:21:13.921 --rc genhtml_function_coverage=1 00:21:13.921 --rc genhtml_legend=1 00:21:13.921 --rc geninfo_all_blocks=1 00:21:13.921 --rc geninfo_unexecuted_blocks=1 00:21:13.921 00:21:13.921 ' 00:21:13.921 05:23:30 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.921 05:23:30 -- nvmf/common.sh@7 -- # uname -s 00:21:13.921 05:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.921 05:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.921 05:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.921 05:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.921 05:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.921 05:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.921 05:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.921 05:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.921 05:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.921 05:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.921 05:23:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:13.921 05:23:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:13.921 05:23:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.921 05:23:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.921 05:23:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.921 05:23:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:13.921 05:23:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.921 05:23:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.921 05:23:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.921 05:23:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.921 05:23:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.921 05:23:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.921 05:23:30 -- paths/export.sh@5 -- # export PATH 00:21:13.921 05:23:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.921 05:23:30 -- nvmf/common.sh@46 -- # : 0 00:21:13.921 05:23:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.921 05:23:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.921 05:23:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.921 05:23:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.921 05:23:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.921 05:23:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.921 05:23:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.921 05:23:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.921 05:23:30 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.921 05:23:30 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.921 05:23:30 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:13.921 05:23:30 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:13.921 05:23:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:13.921 05:23:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.921 05:23:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.921 05:23:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.921 05:23:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.921 05:23:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.921 05:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.921 05:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.921 05:23:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:13.921 05:23:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.921 05:23:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.921 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:20.570 05:23:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:20.570 05:23:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:20.570 05:23:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:20.570 05:23:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:20.570 05:23:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:20.570 05:23:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:20.570 05:23:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:20.570 05:23:36 -- nvmf/common.sh@294 -- # net_devs=() 00:21:20.570 05:23:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:20.570 05:23:36 -- nvmf/common.sh@295 -- # e810=() 00:21:20.570 05:23:36 -- nvmf/common.sh@295 -- # local -ga e810 00:21:20.570 05:23:36 -- nvmf/common.sh@296 -- # x722=() 00:21:20.570 05:23:36 -- nvmf/common.sh@296 -- # local -ga x722 00:21:20.570 05:23:36 -- nvmf/common.sh@297 -- # mlx=() 00:21:20.570 05:23:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:20.570 05:23:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.570 05:23:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:20.570 05:23:36 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:20.570 05:23:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:20.570 05:23:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:20.570 05:23:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:20.570 05:23:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.570 05:23:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:20.570 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:20.570 05:23:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.570 05:23:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.570 05:23:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:20.570 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:20.570 05:23:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.570 05:23:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:20.570 05:23:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:20.570 05:23:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.570 05:23:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.570 05:23:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.570 05:23:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.570 05:23:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:20.570 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:20.570 05:23:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.570 05:23:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.571 05:23:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.571 05:23:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.571 05:23:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:20.571 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.571 05:23:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:20.571 05:23:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:20.571 05:23:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:20.571 05:23:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:20.571 05:23:36 -- nvmf/common.sh@57 -- # uname 00:21:20.571 05:23:36 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:21:20.571 05:23:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:20.571 05:23:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:20.571 05:23:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:20.571 05:23:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:20.571 05:23:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:20.571 05:23:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:20.571 05:23:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:20.571 05:23:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:20.571 05:23:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:20.571 05:23:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:20.571 05:23:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.571 05:23:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:20.571 05:23:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:20.571 05:23:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.571 05:23:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:20.571 05:23:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@104 -- # continue 2 00:21:20.571 05:23:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@104 -- # continue 2 00:21:20.571 05:23:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:20.571 05:23:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.571 05:23:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:20.571 05:23:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:20.571 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.571 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:20.571 altname enp217s0f0np0 00:21:20.571 altname ens818f0np0 00:21:20.571 inet 192.168.100.8/24 scope global mlx_0_0 00:21:20.571 valid_lft forever preferred_lft forever 00:21:20.571 05:23:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:20.571 05:23:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.571 05:23:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:20.571 05:23:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:20.571 05:23:36 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:20.571 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.571 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:20.571 altname enp217s0f1np1 00:21:20.571 altname ens818f1np1 00:21:20.571 inet 192.168.100.9/24 scope global mlx_0_1 00:21:20.571 valid_lft forever preferred_lft forever 00:21:20.571 05:23:36 -- nvmf/common.sh@410 -- # return 0 00:21:20.571 05:23:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:20.571 05:23:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:20.571 05:23:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:20.571 05:23:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:20.571 05:23:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.571 05:23:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:20.571 05:23:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:20.571 05:23:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.571 05:23:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:20.571 05:23:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@104 -- # continue 2 00:21:20.571 05:23:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.571 05:23:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.571 05:23:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@104 -- # continue 2 00:21:20.571 05:23:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:20.571 05:23:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.571 05:23:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:20.571 05:23:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.571 05:23:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.571 05:23:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:20.571 192.168.100.9' 00:21:20.571 05:23:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:20.571 192.168.100.9' 00:21:20.571 05:23:36 -- nvmf/common.sh@445 -- # head -n 1 00:21:20.571 05:23:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:20.571 05:23:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:20.571 192.168.100.9' 00:21:20.571 05:23:36 -- nvmf/common.sh@446 -- # tail -n +2 00:21:20.571 05:23:36 -- nvmf/common.sh@446 -- # head -n 1 00:21:20.571 05:23:36 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:20.571 05:23:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:20.571 05:23:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:20.571 05:23:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:20.571 05:23:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:20.571 05:23:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:20.571 05:23:37 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:20.571 05:23:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:20.571 05:23:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.571 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:20.571 05:23:37 -- nvmf/common.sh@469 -- # nvmfpid=1864662 00:21:20.571 05:23:37 -- nvmf/common.sh@470 -- # waitforlisten 1864662 00:21:20.571 05:23:37 -- common/autotest_common.sh@829 -- # '[' -z 1864662 ']' 00:21:20.571 05:23:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.571 05:23:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.571 05:23:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.571 05:23:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.571 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:20.571 05:23:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:20.571 [2024-11-19 05:23:37.063948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:20.571 [2024-11-19 05:23:37.064000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.571 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.829 [2024-11-19 05:23:37.136926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.829 [2024-11-19 05:23:37.176578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:20.829 [2024-11-19 05:23:37.176713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.829 [2024-11-19 05:23:37.176723] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.829 [2024-11-19 05:23:37.176735] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
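Before any of the RPCs that follow can run, nvmfappstart has to bring the target up and block until its RPC socket answers; that is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is doing. Below is a condensed sketch of that bring-up, with the polling loop written against SPDK's stock scripts/rpc.py as a stand-in for the real waitforlisten helper in autotest_common.sh, not that helper's actual code.

# SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk (assumed shorthand).
# -i 0 selects shared-memory id 0, -e 0xFFFF enables all tracepoint groups
# (the 'Tracepoint Group Mask 0xFFFF specified' notice above), and -m 0xF
# runs reactors on cores 0-3 (the four 'Reactor started' notices that follow).
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target is ready to serve RPCs.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done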
00:21:20.829 [2024-11-19 05:23:37.176786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.829 [2024-11-19 05:23:37.176887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.829 [2024-11-19 05:23:37.176952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.829 [2024-11-19 05:23:37.176954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.393 05:23:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.393 05:23:37 -- common/autotest_common.sh@862 -- # return 0 00:21:21.393 05:23:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:21.393 05:23:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.393 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.393 05:23:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.393 05:23:37 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:21.393 05:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.393 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.393 [2024-11-19 05:23:37.954911] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xff1200/0xff56f0) succeed. 00:21:21.651 [2024-11-19 05:23:37.964063] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xff27f0/0x1036d90) succeed. 00:21:21.651 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:21.652 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.652 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 Malloc1 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 [2024-11-19 05:23:38.137417] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.652 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 
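The trace above shows the first iteration only; multiconnection.sh repeats the same four RPCs for every index from 1 to 11 (NVMF_SUBSYS=11). Condensed into the loop it comes from, using the same rpc_cmd helper seen in the xtrace (plain scripts/rpc.py calls against /var/tmp/spdk.sock would behave identically), the whole setup is this sketch:

# One shared RDMA transport, then a bdev/subsystem/namespace/listener per index.
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 11); do
    # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE).
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # -a allows any host NQN to connect, -s sets the serial number.
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done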
00:21:21.652 Malloc2 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.652 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.652 Malloc3 00:21:21.652 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.652 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:21.652 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.652 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.910 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 Malloc4 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:21.910 05:23:38 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.910 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 Malloc5 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.910 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 Malloc6 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.910 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 Malloc7 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.910 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 Malloc8 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.910 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:21.910 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.910 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.168 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 Malloc9 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.168 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 Malloc10 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.168 05:23:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 Malloc11 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:22.168 05:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.168 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 05:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.168 05:23:38 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:22.168 05:23:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.168 05:23:38 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:23.113 05:23:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:23.113 05:23:39 -- common/autotest_common.sh@1187 -- # local i=0 00:21:23.113 05:23:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.113 05:23:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:23.113 05:23:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:25.636 05:23:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:25.636 05:23:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:25.636 05:23:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:25.636 05:23:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:25.636 05:23:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.636 05:23:41 -- common/autotest_common.sh@1197 -- # return 0 00:21:25.636 05:23:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:25.636 05:23:41 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:26.200 05:23:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:26.200 05:23:42 -- common/autotest_common.sh@1187 -- # local i=0 00:21:26.200 05:23:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.200 05:23:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:26.200 05:23:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:28.096 05:23:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:28.096 05:23:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:28.096 05:23:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:28.096 05:23:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:28.096 05:23:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:28.096 05:23:44 -- common/autotest_common.sh@1197 -- # return 0 00:21:28.096 05:23:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:28.096 05:23:44 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:29.468 05:23:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:29.468 05:23:45 -- common/autotest_common.sh@1187 -- # local i=0 00:21:29.468 05:23:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:29.468 05:23:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:29.468 05:23:45 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:21:31.365 05:23:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:31.365 05:23:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:31.365 05:23:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:31.365 05:23:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:31.365 05:23:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.365 05:23:47 -- common/autotest_common.sh@1197 -- # return 0 00:21:31.365 05:23:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.365 05:23:47 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:32.298 05:23:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:32.298 05:23:48 -- common/autotest_common.sh@1187 -- # local i=0 00:21:32.298 05:23:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.298 05:23:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:32.298 05:23:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:34.196 05:23:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:34.196 05:23:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:34.196 05:23:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:34.196 05:23:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:34.196 05:23:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.196 05:23:50 -- common/autotest_common.sh@1197 -- # return 0 00:21:34.196 05:23:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:34.196 05:23:50 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:35.129 05:23:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:35.129 05:23:51 -- common/autotest_common.sh@1187 -- # local i=0 00:21:35.129 05:23:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:35.129 05:23:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:35.129 05:23:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:37.654 05:23:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:37.654 05:23:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:37.654 05:23:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:37.654 05:23:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:37.654 05:23:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:37.654 05:23:53 -- common/autotest_common.sh@1197 -- # return 0 00:21:37.654 05:23:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:37.654 05:23:53 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:38.219 05:23:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:38.219 05:23:54 -- common/autotest_common.sh@1187 -- # local i=0 00:21:38.219 05:23:54 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:38.219 05:23:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:38.219 05:23:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:40.117 05:23:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:40.117 05:23:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:40.117 05:23:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:40.374 05:23:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:40.374 05:23:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:40.374 05:23:56 -- common/autotest_common.sh@1197 -- # return 0 00:21:40.374 05:23:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.374 05:23:56 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:41.306 05:23:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:41.306 05:23:57 -- common/autotest_common.sh@1187 -- # local i=0 00:21:41.306 05:23:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:41.306 05:23:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:41.306 05:23:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:43.205 05:23:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:43.205 05:23:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:43.205 05:23:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:43.205 05:23:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:43.205 05:23:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:43.205 05:23:59 -- common/autotest_common.sh@1197 -- # return 0 00:21:43.205 05:23:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:43.205 05:23:59 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:44.574 05:24:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:44.574 05:24:00 -- common/autotest_common.sh@1187 -- # local i=0 00:21:44.574 05:24:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.574 05:24:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:44.574 05:24:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:46.471 05:24:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:46.471 05:24:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:46.471 05:24:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:46.471 05:24:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:46.471 05:24:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:46.471 05:24:02 -- common/autotest_common.sh@1197 -- # return 0 00:21:46.471 05:24:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.471 05:24:02 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:47.404 
05:24:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:47.404 05:24:03 -- common/autotest_common.sh@1187 -- # local i=0 00:21:47.404 05:24:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.404 05:24:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:47.404 05:24:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:49.305 05:24:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:49.305 05:24:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:49.305 05:24:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:49.305 05:24:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:49.305 05:24:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.305 05:24:05 -- common/autotest_common.sh@1197 -- # return 0 00:21:49.305 05:24:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.305 05:24:05 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:50.237 05:24:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:50.237 05:24:06 -- common/autotest_common.sh@1187 -- # local i=0 00:21:50.237 05:24:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.237 05:24:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:50.237 05:24:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:52.763 05:24:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:52.763 05:24:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:52.763 05:24:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:21:52.763 05:24:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:52.763 05:24:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.763 05:24:08 -- common/autotest_common.sh@1197 -- # return 0 00:21:52.763 05:24:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.763 05:24:08 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:53.327 05:24:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:53.327 05:24:09 -- common/autotest_common.sh@1187 -- # local i=0 00:21:53.327 05:24:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.327 05:24:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:53.327 05:24:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:55.225 05:24:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:55.225 05:24:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:55.225 05:24:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:21:55.482 05:24:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:55.482 05:24:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.482 05:24:11 -- common/autotest_common.sh@1197 -- # return 0 00:21:55.482 05:24:11 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:55.482 [global] 00:21:55.482 
thread=1 00:21:55.482 invalidate=1 00:21:55.482 rw=read 00:21:55.482 time_based=1 00:21:55.482 runtime=10 00:21:55.482 ioengine=libaio 00:21:55.482 direct=1 00:21:55.482 bs=262144 00:21:55.482 iodepth=64 00:21:55.482 norandommap=1 00:21:55.482 numjobs=1 00:21:55.482 00:21:55.482 [job0] 00:21:55.482 filename=/dev/nvme0n1 00:21:55.482 [job1] 00:21:55.482 filename=/dev/nvme10n1 00:21:55.482 [job2] 00:21:55.482 filename=/dev/nvme1n1 00:21:55.482 [job3] 00:21:55.482 filename=/dev/nvme2n1 00:21:55.482 [job4] 00:21:55.482 filename=/dev/nvme3n1 00:21:55.482 [job5] 00:21:55.482 filename=/dev/nvme4n1 00:21:55.482 [job6] 00:21:55.482 filename=/dev/nvme5n1 00:21:55.482 [job7] 00:21:55.482 filename=/dev/nvme6n1 00:21:55.482 [job8] 00:21:55.482 filename=/dev/nvme7n1 00:21:55.482 [job9] 00:21:55.482 filename=/dev/nvme8n1 00:21:55.482 [job10] 00:21:55.482 filename=/dev/nvme9n1 00:21:55.748 Could not set queue depth (nvme0n1) 00:21:55.748 Could not set queue depth (nvme10n1) 00:21:55.748 Could not set queue depth (nvme1n1) 00:21:55.748 Could not set queue depth (nvme2n1) 00:21:55.748 Could not set queue depth (nvme3n1) 00:21:55.748 Could not set queue depth (nvme4n1) 00:21:55.748 Could not set queue depth (nvme5n1) 00:21:55.748 Could not set queue depth (nvme6n1) 00:21:55.748 Could not set queue depth (nvme7n1) 00:21:55.748 Could not set queue depth (nvme8n1) 00:21:55.748 Could not set queue depth (nvme9n1) 00:21:56.005 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:56.005 fio-3.35 00:21:56.005 Starting 11 threads 00:22:08.334 00:22:08.334 job0: (groupid=0, jobs=1): err= 0: pid=1871685: Tue Nov 19 05:24:22 2024 00:22:08.334 read: IOPS=918, BW=230MiB/s (241MB/s)(2312MiB/10065msec) 00:22:08.334 slat (usec): min=16, max=26989, avg=1078.80, stdev=2948.35 00:22:08.334 clat (msec): min=12, max=136, avg=68.51, stdev= 9.69 00:22:08.334 lat (msec): min=13, max=136, avg=69.59, stdev=10.16 00:22:08.334 clat percentiles (msec): 00:22:08.334 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 63], 20.00th=[ 64], 00:22:08.334 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:22:08.334 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 88], 00:22:08.335 | 99.00th=[ 95], 99.50th=[ 106], 99.90th=[ 133], 99.95th=[ 138], 00:22:08.335 | 
99.99th=[ 138] 00:22:08.335 bw ( KiB/s): min=179712, max=253440, per=5.83%, avg=235059.20, stdev=26522.18, samples=20 00:22:08.335 iops : min= 702, max= 990, avg=918.20, stdev=103.60, samples=20 00:22:08.335 lat (msec) : 20=0.26%, 50=0.35%, 100=98.59%, 250=0.80% 00:22:08.335 cpu : usr=0.34%, sys=4.49%, ctx=1753, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=9246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job1: (groupid=0, jobs=1): err= 0: pid=1871705: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=919, BW=230MiB/s (241MB/s)(2313MiB/10063msec) 00:22:08.335 slat (usec): min=14, max=41527, avg=1076.14, stdev=3049.61 00:22:08.335 clat (msec): min=13, max=149, avg=68.46, stdev= 9.51 00:22:08.335 lat (msec): min=13, max=149, avg=69.53, stdev=10.03 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:22:08.335 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:22:08.335 | 70.00th=[ 67], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 88], 00:22:08.335 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 138], 99.95th=[ 142], 00:22:08.335 | 99.99th=[ 150] 00:22:08.335 bw ( KiB/s): min=183296, max=258048, per=5.84%, avg=235264.00, stdev=25750.20, samples=20 00:22:08.335 iops : min= 716, max= 1008, avg=919.00, stdev=100.59, samples=20 00:22:08.335 lat (msec) : 20=0.23%, 50=0.34%, 100=99.11%, 250=0.32% 00:22:08.335 cpu : usr=0.40%, sys=4.57%, ctx=1774, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=9253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job2: (groupid=0, jobs=1): err= 0: pid=1871723: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=918, BW=230MiB/s (241MB/s)(2311MiB/10064msec) 00:22:08.335 slat (usec): min=16, max=20456, avg=1077.70, stdev=2670.33 00:22:08.335 clat (msec): min=13, max=144, avg=68.52, stdev= 9.77 00:22:08.335 lat (msec): min=13, max=159, avg=69.60, stdev=10.17 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 63], 20.00th=[ 64], 00:22:08.335 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:22:08.335 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 88], 00:22:08.335 | 99.00th=[ 94], 99.50th=[ 102], 99.90th=[ 140], 99.95th=[ 144], 00:22:08.335 | 99.99th=[ 144] 00:22:08.335 bw ( KiB/s): min=175616, max=254464, per=5.83%, avg=235058.10, stdev=26156.63, samples=20 00:22:08.335 iops : min= 686, max= 994, avg=918.15, stdev=102.16, samples=20 00:22:08.335 lat (msec) : 20=0.25%, 50=0.34%, 100=98.77%, 250=0.65% 00:22:08.335 cpu : usr=0.48%, sys=4.50%, ctx=1789, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=9244,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job3: (groupid=0, jobs=1): err= 0: pid=1871734: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=2926, BW=732MiB/s (767MB/s)(7362MiB/10064msec) 00:22:08.335 slat (usec): min=10, max=64301, avg=337.82, stdev=1433.19 00:22:08.335 clat (msec): min=10, max=152, avg=21.51, stdev=13.36 00:22:08.335 lat (msec): min=11, max=153, avg=21.85, stdev=13.60 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:22:08.335 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:22:08.335 | 70.00th=[ 30], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 34], 00:22:08.335 | 99.00th=[ 89], 99.50th=[ 90], 99.90th=[ 121], 99.95th=[ 127], 00:22:08.335 | 99.99th=[ 132] 00:22:08.335 bw ( KiB/s): min=165888, max=1128960, per=18.67%, avg=752304.75, stdev=325083.47, samples=20 00:22:08.335 iops : min= 648, max= 4410, avg=2938.65, stdev=1269.89, samples=20 00:22:08.335 lat (msec) : 20=65.59%, 50=32.06%, 100=2.13%, 250=0.22% 00:22:08.335 cpu : usr=0.56%, sys=7.63%, ctx=5239, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=29449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job4: (groupid=0, jobs=1): err= 0: pid=1871740: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=3425, BW=856MiB/s (898MB/s)(8595MiB/10036msec) 00:22:08.335 slat (usec): min=10, max=13851, avg=287.97, stdev=711.18 00:22:08.335 clat (usec): min=800, max=75561, avg=18371.29, stdev=7836.11 00:22:08.335 lat (usec): min=843, max=81210, avg=18659.26, stdev=7963.90 00:22:08.335 clat percentiles (usec): 00:22:08.335 | 1.00th=[13698], 5.00th=[14353], 10.00th=[14615], 20.00th=[15008], 00:22:08.335 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:22:08.335 | 70.00th=[16188], 80.00th=[16581], 90.00th=[30540], 95.00th=[41157], 00:22:08.335 | 99.00th=[46400], 99.50th=[47449], 99.90th=[59507], 99.95th=[65274], 00:22:08.335 | 99.99th=[74974] 00:22:08.335 bw ( KiB/s): min=370176, max=1055744, per=21.80%, avg=878643.70, stdev=268648.24, samples=20 00:22:08.335 iops : min= 1446, max= 4124, avg=3432.20, stdev=1049.41, samples=20 00:22:08.335 lat (usec) : 1000=0.01% 00:22:08.335 lat (msec) : 2=0.04%, 4=0.11%, 10=0.39%, 20=85.83%, 50=13.37% 00:22:08.335 lat (msec) : 100=0.26% 00:22:08.335 cpu : usr=0.43%, sys=8.13%, ctx=6944, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=34381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job5: (groupid=0, jobs=1): err= 0: pid=1871765: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=1559, BW=390MiB/s (409MB/s)(3909MiB/10029msec) 00:22:08.335 slat (usec): min=11, max=54446, avg=624.65, stdev=2437.70 00:22:08.335 clat (msec): min=11, max=125, avg=40.38, stdev=17.78 00:22:08.335 lat (msec): min=11, max=132, avg=41.00, stdev=18.16 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 23], 
5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:22:08.335 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:22:08.335 | 70.00th=[ 35], 80.00th=[ 49], 90.00th=[ 79], 95.00th=[ 81], 00:22:08.335 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 123], 99.95th=[ 124], 00:22:08.335 | 99.99th=[ 125] 00:22:08.335 bw ( KiB/s): min=197632, max=547328, per=9.89%, avg=398694.40, stdev=136866.46, samples=20 00:22:08.335 iops : min= 772, max= 2138, avg=1557.40, stdev=534.63, samples=20 00:22:08.335 lat (msec) : 20=0.67%, 50=81.40%, 100=17.73%, 250=0.20% 00:22:08.335 cpu : usr=0.39%, sys=4.97%, ctx=3052, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=15637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job6: (groupid=0, jobs=1): err= 0: pid=1871777: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=865, BW=216MiB/s (227MB/s)(2179MiB/10065msec) 00:22:08.335 slat (usec): min=12, max=37054, avg=1135.06, stdev=2979.65 00:22:08.335 clat (msec): min=12, max=150, avg=72.69, stdev=10.79 00:22:08.335 lat (msec): min=12, max=154, avg=73.83, stdev=11.25 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:22:08.335 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 79], 00:22:08.335 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 89], 00:22:08.335 | 99.00th=[ 96], 99.50th=[ 110], 99.90th=[ 148], 99.95th=[ 150], 00:22:08.335 | 99.99th=[ 150] 00:22:08.335 bw ( KiB/s): min=180736, max=256000, per=5.50%, avg=221516.80, stdev=27051.35, samples=20 00:22:08.335 iops : min= 706, max= 1000, avg=865.30, stdev=105.67, samples=20 00:22:08.335 lat (msec) : 20=0.34%, 50=0.36%, 100=98.66%, 250=0.64% 00:22:08.335 cpu : usr=0.33%, sys=4.18%, ctx=1730, majf=0, minf=4097 00:22:08.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:08.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.335 issued rwts: total=8716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.335 job7: (groupid=0, jobs=1): err= 0: pid=1871786: Tue Nov 19 05:24:22 2024 00:22:08.335 read: IOPS=1160, BW=290MiB/s (304MB/s)(2913MiB/10038msec) 00:22:08.335 slat (usec): min=12, max=44578, avg=818.41, stdev=2618.60 00:22:08.335 clat (msec): min=11, max=126, avg=54.26, stdev=18.95 00:22:08.335 lat (msec): min=12, max=126, avg=55.07, stdev=19.38 00:22:08.335 clat percentiles (msec): 00:22:08.335 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:22:08.335 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 63], 60.00th=[ 64], 00:22:08.335 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 81], 00:22:08.335 | 99.00th=[ 85], 99.50th=[ 90], 99.90th=[ 111], 99.95th=[ 120], 00:22:08.335 | 99.99th=[ 122] 00:22:08.335 bw ( KiB/s): min=202240, max=523776, per=7.36%, avg=296678.40, stdev=96404.88, samples=20 00:22:08.335 iops : min= 790, max= 2046, avg=1158.90, stdev=376.58, samples=20 00:22:08.335 lat (msec) : 20=1.77%, 50=42.19%, 100=55.78%, 250=0.26% 00:22:08.335 cpu : usr=0.31%, sys=4.86%, ctx=2600, majf=0, minf=3659 00:22:08.335 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.336 issued rwts: total=11652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.336 job8: (groupid=0, jobs=1): err= 0: pid=1871803: Tue Nov 19 05:24:22 2024 00:22:08.336 read: IOPS=920, BW=230MiB/s (241MB/s)(2317MiB/10065msec) 00:22:08.336 slat (usec): min=13, max=20074, avg=1074.46, stdev=2682.82 00:22:08.336 clat (msec): min=9, max=151, avg=68.37, stdev=10.48 00:22:08.336 lat (msec): min=9, max=151, avg=69.45, stdev=10.86 00:22:08.336 clat percentiles (msec): 00:22:08.336 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 63], 00:22:08.336 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:22:08.336 | 70.00th=[ 67], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 88], 00:22:08.336 | 99.00th=[ 95], 99.50th=[ 104], 99.90th=[ 146], 99.95th=[ 146], 00:22:08.336 | 99.99th=[ 153] 00:22:08.336 bw ( KiB/s): min=172544, max=260096, per=5.85%, avg=235648.00, stdev=26962.77, samples=20 00:22:08.336 iops : min= 674, max= 1016, avg=920.50, stdev=105.32, samples=20 00:22:08.336 lat (msec) : 10=0.08%, 20=0.36%, 50=0.42%, 100=98.49%, 250=0.66% 00:22:08.336 cpu : usr=0.40%, sys=4.53%, ctx=1786, majf=0, minf=4098 00:22:08.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.336 issued rwts: total=9268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.336 job9: (groupid=0, jobs=1): err= 0: pid=1871805: Tue Nov 19 05:24:22 2024 00:22:08.336 read: IOPS=1032, BW=258MiB/s (271MB/s)(2590MiB/10037msec) 00:22:08.336 slat (usec): min=12, max=24432, avg=958.33, stdev=2523.43 00:22:08.336 clat (msec): min=12, max=102, avg=60.99, stdev=13.59 00:22:08.336 lat (msec): min=12, max=104, avg=61.95, stdev=13.97 00:22:08.336 clat percentiles (msec): 00:22:08.336 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 47], 00:22:08.336 | 30.00th=[ 50], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:22:08.336 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 80], 95.00th=[ 81], 00:22:08.336 | 99.00th=[ 86], 99.50th=[ 88], 99.90th=[ 97], 99.95th=[ 99], 00:22:08.336 | 99.99th=[ 104] 00:22:08.336 bw ( KiB/s): min=198144, max=381952, per=6.54%, avg=263577.60, stdev=58876.98, samples=20 00:22:08.336 iops : min= 774, max= 1492, avg=1029.60, stdev=229.99, samples=20 00:22:08.336 lat (msec) : 20=0.27%, 50=31.82%, 100=67.89%, 250=0.02% 00:22:08.336 cpu : usr=0.39%, sys=4.87%, ctx=2019, majf=0, minf=4097 00:22:08.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.336 issued rwts: total=10359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.336 job10: (groupid=0, jobs=1): err= 0: pid=1871806: Tue Nov 19 05:24:22 2024 00:22:08.336 read: IOPS=1120, BW=280MiB/s (294MB/s)(2808MiB/10027msec) 00:22:08.336 slat (usec): min=12, max=26161, avg=886.15, stdev=2341.29 00:22:08.336 clat (msec): min=12, max=104, 
avg=56.19, stdev=18.31 00:22:08.336 lat (msec): min=13, max=105, avg=57.07, stdev=18.70 00:22:08.336 clat percentiles (msec): 00:22:08.336 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:22:08.336 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 64], 00:22:08.336 | 70.00th=[ 66], 80.00th=[ 75], 90.00th=[ 80], 95.00th=[ 81], 00:22:08.336 | 99.00th=[ 86], 99.50th=[ 90], 99.90th=[ 97], 99.95th=[ 100], 00:22:08.336 | 99.99th=[ 104] 00:22:08.336 bw ( KiB/s): min=200192, max=547328, per=7.10%, avg=285952.00, stdev=108050.90, samples=20 00:22:08.336 iops : min= 782, max= 2138, avg=1117.00, stdev=422.07, samples=20 00:22:08.336 lat (msec) : 20=0.22%, 50=37.63%, 100=62.10%, 250=0.04% 00:22:08.336 cpu : usr=0.55%, sys=5.07%, ctx=2121, majf=0, minf=4097 00:22:08.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:08.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:08.336 issued rwts: total=11233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.336 00:22:08.336 Run status group 0 (all jobs): 00:22:08.336 READ: bw=3935MiB/s (4127MB/s), 216MiB/s-856MiB/s (227MB/s-898MB/s), io=38.7GiB (41.5GB), run=10027-10065msec 00:22:08.336 00:22:08.336 Disk stats (read/write): 00:22:08.336 nvme0n1: ios=18187/0, merge=0/0, ticks=1218024/0, in_queue=1218024, util=96.77% 00:22:08.336 nvme10n1: ios=18192/0, merge=0/0, ticks=1219030/0, in_queue=1219030, util=97.01% 00:22:08.336 nvme1n1: ios=18206/0, merge=0/0, ticks=1220169/0, in_queue=1220169, util=97.36% 00:22:08.336 nvme2n1: ios=58644/0, merge=0/0, ticks=1214917/0, in_queue=1214917, util=97.54% 00:22:08.336 nvme3n1: ios=68303/0, merge=0/0, ticks=1211784/0, in_queue=1211784, util=97.63% 00:22:08.336 nvme4n1: ios=30668/0, merge=0/0, ticks=1219296/0, in_queue=1219296, util=98.07% 00:22:08.336 nvme5n1: ios=17114/0, merge=0/0, ticks=1217851/0, in_queue=1217851, util=98.26% 00:22:08.336 nvme6n1: ios=22853/0, merge=0/0, ticks=1222934/0, in_queue=1222934, util=98.41% 00:22:08.336 nvme7n1: ios=18264/0, merge=0/0, ticks=1220857/0, in_queue=1220857, util=98.91% 00:22:08.336 nvme8n1: ios=20239/0, merge=0/0, ticks=1222103/0, in_queue=1222103, util=99.14% 00:22:08.336 nvme9n1: ios=21841/0, merge=0/0, ticks=1222670/0, in_queue=1222670, util=99.29% 00:22:08.336 05:24:22 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:08.336 [global] 00:22:08.336 thread=1 00:22:08.336 invalidate=1 00:22:08.336 rw=randwrite 00:22:08.336 time_based=1 00:22:08.336 runtime=10 00:22:08.336 ioengine=libaio 00:22:08.336 direct=1 00:22:08.336 bs=262144 00:22:08.336 iodepth=64 00:22:08.336 norandommap=1 00:22:08.336 numjobs=1 00:22:08.336 00:22:08.336 [job0] 00:22:08.336 filename=/dev/nvme0n1 00:22:08.336 [job1] 00:22:08.336 filename=/dev/nvme10n1 00:22:08.336 [job2] 00:22:08.336 filename=/dev/nvme1n1 00:22:08.336 [job3] 00:22:08.336 filename=/dev/nvme2n1 00:22:08.336 [job4] 00:22:08.336 filename=/dev/nvme3n1 00:22:08.336 [job5] 00:22:08.336 filename=/dev/nvme4n1 00:22:08.336 [job6] 00:22:08.336 filename=/dev/nvme5n1 00:22:08.336 [job7] 00:22:08.336 filename=/dev/nvme6n1 00:22:08.336 [job8] 00:22:08.336 filename=/dev/nvme7n1 00:22:08.336 [job9] 00:22:08.336 filename=/dev/nvme8n1 00:22:08.336 [job10] 00:22:08.336 filename=/dev/nvme9n1 00:22:08.336 Could not set queue depth 
(nvme0n1) 00:22:08.336 Could not set queue depth (nvme10n1) 00:22:08.336 Could not set queue depth (nvme1n1) 00:22:08.336 Could not set queue depth (nvme2n1) 00:22:08.336 Could not set queue depth (nvme3n1) 00:22:08.336 Could not set queue depth (nvme4n1) 00:22:08.336 Could not set queue depth (nvme5n1) 00:22:08.336 Could not set queue depth (nvme6n1) 00:22:08.336 Could not set queue depth (nvme7n1) 00:22:08.336 Could not set queue depth (nvme8n1) 00:22:08.336 Could not set queue depth (nvme9n1) 00:22:08.336 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:08.336 fio-3.35 00:22:08.336 Starting 11 threads 00:22:18.307 00:22:18.307 job0: (groupid=0, jobs=1): err= 0: pid=1873560: Tue Nov 19 05:24:33 2024 00:22:18.307 write: IOPS=901, BW=225MiB/s (236MB/s)(2266MiB/10058msec); 0 zone resets 00:22:18.307 slat (usec): min=24, max=11519, avg=1091.90, stdev=1961.99 00:22:18.307 clat (msec): min=2, max=140, avg=69.91, stdev=10.84 00:22:18.308 lat (msec): min=2, max=140, avg=71.00, stdev=10.89 00:22:18.308 clat percentiles (msec): 00:22:18.308 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 56], 00:22:18.308 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 75], 00:22:18.308 | 70.00th=[ 77], 80.00th=[ 77], 90.00th=[ 79], 95.00th=[ 80], 00:22:18.308 | 99.00th=[ 83], 99.50th=[ 88], 99.90th=[ 126], 99.95th=[ 132], 00:22:18.308 | 99.99th=[ 142] 00:22:18.308 bw ( KiB/s): min=209408, max=306176, per=6.58%, avg=230382.20, stdev=31477.88, samples=20 00:22:18.308 iops : min= 818, max= 1196, avg=899.90, stdev=122.97, samples=20 00:22:18.308 lat (msec) : 4=0.12%, 10=0.09%, 20=0.12%, 50=1.60%, 100=97.71% 00:22:18.308 lat (msec) : 250=0.36% 00:22:18.308 cpu : usr=2.31%, sys=3.76%, ctx=2247, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,9064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job1: (groupid=0, jobs=1): err= 0: pid=1873572: Tue Nov 19 
05:24:33 2024 00:22:18.308 write: IOPS=1426, BW=357MiB/s (374MB/s)(3586MiB/10054msec); 0 zone resets 00:22:18.308 slat (usec): min=22, max=10857, avg=684.60, stdev=1348.67 00:22:18.308 clat (msec): min=11, max=138, avg=44.16, stdev= 9.59 00:22:18.308 lat (msec): min=11, max=138, avg=44.85, stdev= 9.73 00:22:18.308 clat percentiles (msec): 00:22:18.308 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:22:18.308 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 51], 00:22:18.308 | 70.00th=[ 53], 80.00th=[ 54], 90.00th=[ 55], 95.00th=[ 57], 00:22:18.308 | 99.00th=[ 70], 99.50th=[ 77], 99.90th=[ 121], 99.95th=[ 123], 00:22:18.308 | 99.99th=[ 140] 00:22:18.308 bw ( KiB/s): min=259584, max=442368, per=10.44%, avg=365598.35, stdev=70269.09, samples=20 00:22:18.308 iops : min= 1014, max= 1728, avg=1428.10, stdev=274.51, samples=20 00:22:18.308 lat (msec) : 20=0.08%, 50=58.64%, 100=41.07%, 250=0.21% 00:22:18.308 cpu : usr=3.12%, sys=4.22%, ctx=3529, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,14343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job2: (groupid=0, jobs=1): err= 0: pid=1873573: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=910, BW=228MiB/s (239MB/s)(2285MiB/10042msec); 0 zone resets 00:22:18.308 slat (usec): min=24, max=14177, avg=1089.26, stdev=1943.75 00:22:18.308 clat (usec): min=18869, max=96709, avg=69189.26, stdev=11335.54 00:22:18.308 lat (msec): min=18, max=101, avg=70.28, stdev=11.38 00:22:18.308 clat percentiles (usec): 00:22:18.308 | 1.00th=[35390], 5.00th=[38536], 10.00th=[54264], 20.00th=[57410], 00:22:18.308 | 30.00th=[69731], 40.00th=[72877], 50.00th=[74974], 60.00th=[74974], 00:22:18.308 | 70.00th=[76022], 80.00th=[76022], 90.00th=[78119], 95.00th=[79168], 00:22:18.308 | 99.00th=[81265], 99.50th=[82314], 99.90th=[90702], 99.95th=[96994], 00:22:18.308 | 99.99th=[96994] 00:22:18.308 bw ( KiB/s): min=206236, max=354304, per=6.64%, avg=232417.40, stdev=38972.74, samples=20 00:22:18.308 iops : min= 805, max= 1384, avg=907.85, stdev=152.26, samples=20 00:22:18.308 lat (msec) : 20=0.04%, 50=6.27%, 100=93.69% 00:22:18.308 cpu : usr=2.18%, sys=4.13%, ctx=2256, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,9141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job3: (groupid=0, jobs=1): err= 0: pid=1873574: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=909, BW=227MiB/s (238MB/s)(2283MiB/10042msec); 0 zone resets 00:22:18.308 slat (usec): min=23, max=8664, avg=1067.06, stdev=1993.64 00:22:18.308 clat (usec): min=9558, max=85958, avg=69275.47, stdev=10961.75 00:22:18.308 lat (usec): min=9637, max=86900, avg=70342.53, stdev=11021.01 00:22:18.308 clat percentiles (usec): 00:22:18.308 | 1.00th=[35914], 5.00th=[51119], 10.00th=[53216], 20.00th=[55837], 00:22:18.308 | 30.00th=[69731], 40.00th=[72877], 50.00th=[74974], 60.00th=[74974], 00:22:18.308 | 70.00th=[76022], 80.00th=[76022], 90.00th=[77071], 
95.00th=[79168], 00:22:18.308 | 99.00th=[80217], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:22:18.308 | 99.99th=[85459] 00:22:18.308 bw ( KiB/s): min=208896, max=329728, per=6.63%, avg=232192.00, stdev=36518.62, samples=20 00:22:18.308 iops : min= 816, max= 1288, avg=907.00, stdev=142.65, samples=20 00:22:18.308 lat (msec) : 10=0.01%, 20=0.38%, 50=3.54%, 100=96.07% 00:22:18.308 cpu : usr=2.02%, sys=4.34%, ctx=2305, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,9133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job4: (groupid=0, jobs=1): err= 0: pid=1873575: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=1602, BW=401MiB/s (420MB/s)(4018MiB/10027msec); 0 zone resets 00:22:18.308 slat (usec): min=24, max=9066, avg=618.20, stdev=1134.78 00:22:18.308 clat (usec): min=4495, max=61660, avg=39297.52, stdev=6322.62 00:22:18.308 lat (usec): min=4546, max=64232, avg=39915.71, stdev=6363.49 00:22:18.308 clat percentiles (usec): 00:22:18.308 | 1.00th=[21627], 5.00th=[34341], 10.00th=[35390], 20.00th=[36439], 00:22:18.308 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:22:18.308 | 70.00th=[38536], 80.00th=[39060], 90.00th=[52691], 95.00th=[54789], 00:22:18.308 | 99.00th=[57410], 99.50th=[57934], 99.90th=[60031], 99.95th=[61080], 00:22:18.308 | 99.99th=[61604] 00:22:18.308 bw ( KiB/s): min=296960, max=437760, per=11.71%, avg=409830.40, stdev=45424.25, samples=20 00:22:18.308 iops : min= 1160, max= 1710, avg=1600.90, stdev=177.44, samples=20 00:22:18.308 lat (msec) : 10=0.04%, 20=0.77%, 50=86.57%, 100=12.62% 00:22:18.308 cpu : usr=3.55%, sys=5.47%, ctx=3938, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,16072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job5: (groupid=0, jobs=1): err= 0: pid=1873576: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=1176, BW=294MiB/s (308MB/s)(2954MiB/10042msec); 0 zone resets 00:22:18.308 slat (usec): min=24, max=9787, avg=828.17, stdev=1594.44 00:22:18.308 clat (msec): min=9, max=100, avg=53.54, stdev= 4.74 00:22:18.308 lat (msec): min=9, max=101, avg=54.36, stdev= 4.87 00:22:18.308 clat percentiles (usec): 00:22:18.308 | 1.00th=[35390], 5.00th=[49546], 10.00th=[51119], 20.00th=[52167], 00:22:18.308 | 30.00th=[52691], 40.00th=[53216], 50.00th=[53740], 60.00th=[54264], 00:22:18.308 | 70.00th=[55313], 80.00th=[55837], 90.00th=[57410], 95.00th=[58459], 00:22:18.308 | 99.00th=[60556], 99.50th=[61604], 99.90th=[89654], 99.95th=[95945], 00:22:18.308 | 99.99th=[96994] 00:22:18.308 bw ( KiB/s): min=281088, max=345779, per=8.60%, avg=300936.95, stdev=12433.59, samples=20 00:22:18.308 iops : min= 1098, max= 1350, avg=1175.50, stdev=48.44, samples=20 00:22:18.308 lat (msec) : 10=0.07%, 20=0.11%, 50=5.50%, 100=94.31%, 250=0.01% 00:22:18.308 cpu : usr=2.72%, sys=4.86%, ctx=2880, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 
00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,11817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job6: (groupid=0, jobs=1): err= 0: pid=1873577: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=1423, BW=356MiB/s (373MB/s)(3569MiB/10030msec); 0 zone resets 00:22:18.308 slat (usec): min=23, max=9073, avg=696.11, stdev=1306.18 00:22:18.308 clat (usec): min=4554, max=65783, avg=44252.11, stdev=8675.71 00:22:18.308 lat (usec): min=4610, max=65832, avg=44948.22, stdev=8796.02 00:22:18.308 clat percentiles (usec): 00:22:18.308 | 1.00th=[33817], 5.00th=[34866], 10.00th=[35914], 20.00th=[36963], 00:22:18.308 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38536], 60.00th=[50594], 00:22:18.308 | 70.00th=[52691], 80.00th=[54264], 90.00th=[55837], 95.00th=[56886], 00:22:18.308 | 99.00th=[58983], 99.50th=[59507], 99.90th=[61080], 99.95th=[62129], 00:22:18.308 | 99.99th=[65799] 00:22:18.308 bw ( KiB/s): min=294400, max=438784, per=10.39%, avg=363852.80, stdev=67916.61, samples=20 00:22:18.308 iops : min= 1150, max= 1714, avg=1421.30, stdev=265.30, samples=20 00:22:18.308 lat (msec) : 10=0.05%, 20=0.12%, 50=58.95%, 100=40.88% 00:22:18.308 cpu : usr=3.40%, sys=5.20%, ctx=3501, majf=0, minf=1 00:22:18.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.308 issued rwts: total=0,14276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.308 job7: (groupid=0, jobs=1): err= 0: pid=1873578: Tue Nov 19 05:24:33 2024 00:22:18.308 write: IOPS=910, BW=228MiB/s (239MB/s)(2287MiB/10042msec); 0 zone resets 00:22:18.308 slat (usec): min=26, max=10354, avg=1087.68, stdev=1950.16 00:22:18.308 clat (msec): min=14, max=100, avg=69.16, stdev=11.43 00:22:18.308 lat (msec): min=14, max=100, avg=70.24, stdev=11.46 00:22:18.308 clat percentiles (msec): 00:22:18.308 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 55], 20.00th=[ 58], 00:22:18.308 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 75], 00:22:18.308 | 70.00th=[ 77], 80.00th=[ 77], 90.00th=[ 79], 95.00th=[ 79], 00:22:18.308 | 99.00th=[ 81], 99.50th=[ 84], 99.90th=[ 94], 99.95th=[ 97], 00:22:18.308 | 99.99th=[ 102] 00:22:18.309 bw ( KiB/s): min=209408, max=356352, per=6.64%, avg=232524.80, stdev=39087.22, samples=20 00:22:18.309 iops : min= 818, max= 1392, avg=908.30, stdev=152.68, samples=20 00:22:18.309 lat (msec) : 20=0.09%, 50=6.32%, 100=93.57%, 250=0.02% 00:22:18.309 cpu : usr=2.44%, sys=4.12%, ctx=2275, majf=0, minf=1 00:22:18.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:18.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.309 issued rwts: total=0,9146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.309 job8: (groupid=0, jobs=1): err= 0: pid=1873579: Tue Nov 19 05:24:33 2024 00:22:18.309 write: IOPS=1575, BW=394MiB/s (413MB/s)(3960MiB/10054msec); 0 zone resets 00:22:18.309 slat (usec): min=23, max=13455, avg=621.27, stdev=1169.33 00:22:18.309 clat 
(msec): min=10, max=133, avg=39.99, stdev= 7.65 00:22:18.309 lat (msec): min=10, max=133, avg=40.61, stdev= 7.71 00:22:18.309 clat percentiles (msec): 00:22:18.309 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:22:18.309 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 39], 00:22:18.309 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 54], 95.00th=[ 56], 00:22:18.309 | 99.00th=[ 66], 99.50th=[ 72], 99.90th=[ 118], 99.95th=[ 127], 00:22:18.309 | 99.99th=[ 130] 00:22:18.309 bw ( KiB/s): min=262656, max=442368, per=11.54%, avg=403840.00, stdev=56316.29, samples=20 00:22:18.309 iops : min= 1026, max= 1728, avg=1577.50, stdev=219.99, samples=20 00:22:18.309 lat (msec) : 20=0.08%, 50=85.83%, 100=13.90%, 250=0.19% 00:22:18.309 cpu : usr=3.21%, sys=5.53%, ctx=3869, majf=0, minf=1 00:22:18.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.309 issued rwts: total=0,15838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.309 job9: (groupid=0, jobs=1): err= 0: pid=1873580: Tue Nov 19 05:24:33 2024 00:22:18.309 write: IOPS=1435, BW=359MiB/s (376MB/s)(3608MiB/10053msec); 0 zone resets 00:22:18.309 slat (usec): min=22, max=12947, avg=688.80, stdev=1330.67 00:22:18.309 clat (msec): min=4, max=132, avg=43.88, stdev= 9.53 00:22:18.309 lat (msec): min=4, max=132, avg=44.56, stdev= 9.66 00:22:18.309 clat percentiles (msec): 00:22:18.309 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:22:18.309 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 51], 00:22:18.309 | 70.00th=[ 53], 80.00th=[ 54], 90.00th=[ 55], 95.00th=[ 56], 00:22:18.309 | 99.00th=[ 69], 99.50th=[ 73], 99.90th=[ 117], 99.95th=[ 121], 00:22:18.309 | 99.99th=[ 133] 00:22:18.309 bw ( KiB/s): min=262656, max=441856, per=10.51%, avg=367846.40, stdev=68635.64, samples=20 00:22:18.309 iops : min= 1026, max= 1726, avg=1436.90, stdev=268.11, samples=20 00:22:18.309 lat (msec) : 10=0.06%, 20=0.11%, 50=60.05%, 100=39.55%, 250=0.23% 00:22:18.309 cpu : usr=3.39%, sys=4.83%, ctx=3517, majf=0, minf=1 00:22:18.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.309 issued rwts: total=0,14432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.309 job10: (groupid=0, jobs=1): err= 0: pid=1873581: Tue Nov 19 05:24:33 2024 00:22:18.309 write: IOPS=1423, BW=356MiB/s (373MB/s)(3570MiB/10030msec); 0 zone resets 00:22:18.309 slat (usec): min=22, max=8722, avg=696.05, stdev=1315.52 00:22:18.309 clat (usec): min=9711, max=64839, avg=44239.12, stdev=8627.69 00:22:18.309 lat (usec): min=9766, max=64891, avg=44935.17, stdev=8746.39 00:22:18.309 clat percentiles (usec): 00:22:18.309 | 1.00th=[33817], 5.00th=[34866], 10.00th=[35914], 20.00th=[36963], 00:22:18.309 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38536], 60.00th=[50594], 00:22:18.309 | 70.00th=[52691], 80.00th=[54264], 90.00th=[55837], 95.00th=[56886], 00:22:18.309 | 99.00th=[58983], 99.50th=[60031], 99.90th=[61080], 99.95th=[62129], 00:22:18.309 | 99.99th=[64750] 00:22:18.309 bw ( KiB/s): min=294912, max=438272, per=10.40%, avg=363955.20, 
stdev=67889.79, samples=20 00:22:18.309 iops : min= 1152, max= 1712, avg=1421.70, stdev=265.19, samples=20 00:22:18.309 lat (msec) : 10=0.03%, 20=0.10%, 50=59.04%, 100=40.83% 00:22:18.309 cpu : usr=3.07%, sys=5.40%, ctx=3513, majf=0, minf=1 00:22:18.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:18.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:18.309 issued rwts: total=0,14280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.309 00:22:18.309 Run status group 0 (all jobs): 00:22:18.309 WRITE: bw=3419MiB/s (3585MB/s), 225MiB/s-401MiB/s (236MB/s-420MB/s), io=33.6GiB (36.1GB), run=10027-10058msec 00:22:18.309 00:22:18.309 Disk stats (read/write): 00:22:18.309 nvme0n1: ios=49/17728, merge=0/0, ticks=20/1212580, in_queue=1212600, util=96.38% 00:22:18.309 nvme10n1: ios=0/28284, merge=0/0, ticks=0/1213818, in_queue=1213818, util=96.55% 00:22:18.309 nvme1n1: ios=0/17788, merge=0/0, ticks=0/1210270, in_queue=1210270, util=96.97% 00:22:18.309 nvme2n1: ios=0/17629, merge=0/0, ticks=0/1217363, in_queue=1217363, util=97.33% 00:22:18.309 nvme3n1: ios=0/31262, merge=0/0, ticks=0/1216555, in_queue=1216555, util=97.47% 00:22:18.309 nvme4n1: ios=0/23140, merge=0/0, ticks=0/1212088, in_queue=1212088, util=97.84% 00:22:18.309 nvme5n1: ios=0/27860, merge=0/0, ticks=0/1215254, in_queue=1215254, util=98.06% 00:22:18.309 nvme6n1: ios=0/17801, merge=0/0, ticks=0/1218058, in_queue=1218058, util=98.19% 00:22:18.309 nvme7n1: ios=0/31265, merge=0/0, ticks=0/1214740, in_queue=1214740, util=98.66% 00:22:18.309 nvme8n1: ios=0/28465, merge=0/0, ticks=0/1213132, in_queue=1213132, util=98.86% 00:22:18.309 nvme9n1: ios=0/27861, merge=0/0, ticks=0/1215651, in_queue=1215651, util=99.01% 00:22:18.309 05:24:33 -- target/multiconnection.sh@36 -- # sync 00:22:18.309 05:24:33 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:18.309 05:24:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.309 05:24:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:18.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:18.567 05:24:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:18.567 05:24:34 -- common/autotest_common.sh@1208 -- # local i=0 00:22:18.567 05:24:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:18.567 05:24:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:18.567 05:24:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:18.567 05:24:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:18.567 05:24:34 -- common/autotest_common.sh@1220 -- # return 0 00:22:18.567 05:24:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.567 05:24:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.567 05:24:34 -- common/autotest_common.sh@10 -- # set +x 00:22:18.567 05:24:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.567 05:24:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.567 05:24:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:19.501 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:19.501 05:24:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 
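[Editorial note] The waitforserial_disconnect calls interleaved here poll in the opposite direction; a sketch inferred from the autotest_common.sh@1208-1220 trace lines (the retry bound and sleep are assumptions — the log only shows the two grep probes and the final return 0):

waitforserial_disconnect() {
    local serial=$1
    local i=0
    # wait until no device in the tree listing still advertises the serial
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        ((i++ > 15)) && return 1
        sleep 1
    done
    # confirm against the flat (-l) listing as well before succeeding
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
    return 0
}

Only after the serial disappears does the script issue rpc_cmd nvmf_delete_subsystem for that cnode.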
00:22:19.501 05:24:35 -- common/autotest_common.sh@1208 -- # local i=0 00:22:19.501 05:24:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:19.501 05:24:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:19.501 05:24:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:19.501 05:24:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:19.501 05:24:35 -- common/autotest_common.sh@1220 -- # return 0 00:22:19.501 05:24:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:19.501 05:24:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.501 05:24:35 -- common/autotest_common.sh@10 -- # set +x 00:22:19.501 05:24:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.501 05:24:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.501 05:24:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:20.436 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:20.436 05:24:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:20.436 05:24:36 -- common/autotest_common.sh@1208 -- # local i=0 00:22:20.436 05:24:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:20.436 05:24:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:20.436 05:24:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:20.436 05:24:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:20.436 05:24:36 -- common/autotest_common.sh@1220 -- # return 0 00:22:20.436 05:24:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:20.436 05:24:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.436 05:24:36 -- common/autotest_common.sh@10 -- # set +x 00:22:20.436 05:24:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.436 05:24:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.436 05:24:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:21.371 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:21.371 05:24:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:21.371 05:24:37 -- common/autotest_common.sh@1208 -- # local i=0 00:22:21.371 05:24:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:21.371 05:24:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:21.371 05:24:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:21.371 05:24:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:21.371 05:24:37 -- common/autotest_common.sh@1220 -- # return 0 00:22:21.371 05:24:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:21.371 05:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.371 05:24:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.629 05:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.629 05:24:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.629 05:24:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:22.565 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:22.565 05:24:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:22.565 05:24:38 -- common/autotest_common.sh@1208 -- # local i=0 00:22:22.565 05:24:38 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:22.565 05:24:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:22.565 05:24:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:22.565 05:24:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:22.565 05:24:38 -- common/autotest_common.sh@1220 -- # return 0 00:22:22.565 05:24:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:22.565 05:24:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.565 05:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:22.565 05:24:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.565 05:24:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.565 05:24:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:23.497 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:23.497 05:24:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:23.497 05:24:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:23.498 05:24:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:23.498 05:24:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:23.498 05:24:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:23.498 05:24:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:23.498 05:24:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:23.498 05:24:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:23.498 05:24:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.498 05:24:39 -- common/autotest_common.sh@10 -- # set +x 00:22:23.498 05:24:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.498 05:24:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.498 05:24:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:24.433 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:24.433 05:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:24.433 05:24:40 -- common/autotest_common.sh@1208 -- # local i=0 00:22:24.433 05:24:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:24.433 05:24:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:24.433 05:24:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:24.433 05:24:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:24.433 05:24:40 -- common/autotest_common.sh@1220 -- # return 0 00:22:24.433 05:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:24.433 05:24:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.433 05:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.433 05:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.433 05:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.433 05:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:25.368 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:25.368 05:24:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:25.368 05:24:41 -- common/autotest_common.sh@1208 -- # local i=0 00:22:25.368 05:24:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:25.368 05:24:41 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:25.368 05:24:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:25.368 05:24:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:25.368 05:24:41 -- common/autotest_common.sh@1220 -- # return 0 00:22:25.368 05:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:25.368 05:24:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.368 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.626 05:24:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.626 05:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.626 05:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:26.561 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:26.561 05:24:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:26.561 05:24:42 -- common/autotest_common.sh@1208 -- # local i=0 00:22:26.561 05:24:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:26.561 05:24:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:26.561 05:24:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:26.561 05:24:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:26.562 05:24:42 -- common/autotest_common.sh@1220 -- # return 0 00:22:26.562 05:24:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:26.562 05:24:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.562 05:24:42 -- common/autotest_common.sh@10 -- # set +x 00:22:26.562 05:24:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.562 05:24:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.562 05:24:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:27.497 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:27.498 05:24:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:27.498 05:24:43 -- common/autotest_common.sh@1208 -- # local i=0 00:22:27.498 05:24:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:27.498 05:24:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:27.498 05:24:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:27.498 05:24:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:27.498 05:24:43 -- common/autotest_common.sh@1220 -- # return 0 00:22:27.498 05:24:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:27.498 05:24:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.498 05:24:43 -- common/autotest_common.sh@10 -- # set +x 00:22:27.498 05:24:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.498 05:24:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.498 05:24:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:28.433 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:28.433 05:24:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:28.433 05:24:44 -- common/autotest_common.sh@1208 -- # local i=0 00:22:28.433 05:24:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:28.433 05:24:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:28.433 05:24:44 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:28.433 05:24:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:28.433 05:24:44 -- common/autotest_common.sh@1220 -- # return 0 00:22:28.433 05:24:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:28.433 05:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.433 05:24:44 -- common/autotest_common.sh@10 -- # set +x 00:22:28.433 05:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.433 05:24:44 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:28.433 05:24:44 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:28.433 05:24:44 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:28.433 05:24:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:28.433 05:24:44 -- nvmf/common.sh@116 -- # sync 00:22:28.433 05:24:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:28.433 05:24:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:28.433 05:24:44 -- nvmf/common.sh@119 -- # set +e 00:22:28.433 05:24:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:28.433 05:24:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:28.433 rmmod nvme_rdma 00:22:28.433 rmmod nvme_fabrics 00:22:28.692 05:24:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:28.692 05:24:45 -- nvmf/common.sh@123 -- # set -e 00:22:28.692 05:24:45 -- nvmf/common.sh@124 -- # return 0 00:22:28.692 05:24:45 -- nvmf/common.sh@477 -- # '[' -n 1864662 ']' 00:22:28.692 05:24:45 -- nvmf/common.sh@478 -- # killprocess 1864662 00:22:28.692 05:24:45 -- common/autotest_common.sh@936 -- # '[' -z 1864662 ']' 00:22:28.692 05:24:45 -- common/autotest_common.sh@940 -- # kill -0 1864662 00:22:28.692 05:24:45 -- common/autotest_common.sh@941 -- # uname 00:22:28.692 05:24:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.692 05:24:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1864662 00:22:28.692 05:24:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:28.692 05:24:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:28.692 05:24:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1864662' 00:22:28.692 killing process with pid 1864662 00:22:28.692 05:24:45 -- common/autotest_common.sh@955 -- # kill 1864662 00:22:28.692 05:24:45 -- common/autotest_common.sh@960 -- # wait 1864662 00:22:29.261 05:24:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:29.261 05:24:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:29.261 00:22:29.261 real 1m15.456s 00:22:29.261 user 4m54.498s 00:22:29.261 sys 0m19.795s 00:22:29.261 05:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:29.261 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:22:29.261 ************************************ 00:22:29.261 END TEST nvmf_multiconnection 00:22:29.261 ************************************ 00:22:29.261 05:24:45 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:29.261 05:24:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:29.261 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:22:29.261 ************************************ 00:22:29.261 START TEST nvmf_initiator_timeout 00:22:29.261 ************************************ 
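[Editorial note] Before the initiator_timeout output begins, the job layout that drove both fio-wrapper passes in the multiconnection test above can be reassembled verbatim from the parameters echoed in the log; only rw= differs between the read and randwrite runs, and the wrapper flags map directly onto it (-i 262144 -> bs, -d 64 -> iodepth, -t randwrite -> rw, -r 10 -> runtime):

[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
; [job2]..[job9] follow the same pattern, one per namespace
[job10]
filename=/dev/nvme9n1

The eleven jobs fan a single fio process out across all eleven connected namespaces at queue depth 64, which is why one "Could not set queue depth" warning appears per device before the run starts.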
00:22:29.261 05:24:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:29.261 * Looking for test storage... 00:22:29.261 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:29.261 05:24:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:29.261 05:24:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:29.261 05:24:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:29.261 05:24:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:29.261 05:24:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:29.261 05:24:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:29.261 05:24:45 -- scripts/common.sh@335 -- # IFS=.-: 00:22:29.261 05:24:45 -- scripts/common.sh@335 -- # read -ra ver1 00:22:29.261 05:24:45 -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.261 05:24:45 -- scripts/common.sh@336 -- # read -ra ver2 00:22:29.261 05:24:45 -- scripts/common.sh@337 -- # local 'op=<' 00:22:29.261 05:24:45 -- scripts/common.sh@339 -- # ver1_l=2 00:22:29.261 05:24:45 -- scripts/common.sh@340 -- # ver2_l=1 00:22:29.261 05:24:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:29.261 05:24:45 -- scripts/common.sh@343 -- # case "$op" in 00:22:29.261 05:24:45 -- scripts/common.sh@344 -- # : 1 00:22:29.261 05:24:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:29.261 05:24:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:29.261 05:24:45 -- scripts/common.sh@364 -- # decimal 1 00:22:29.261 05:24:45 -- scripts/common.sh@352 -- # local d=1 00:22:29.261 05:24:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.261 05:24:45 -- scripts/common.sh@354 -- # echo 1 00:22:29.261 05:24:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:29.261 05:24:45 -- scripts/common.sh@365 -- # decimal 2 00:22:29.261 05:24:45 -- scripts/common.sh@352 -- # local d=2 00:22:29.261 05:24:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.261 05:24:45 -- scripts/common.sh@354 -- # echo 2 00:22:29.261 05:24:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:29.261 05:24:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:29.261 05:24:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:29.261 05:24:45 -- scripts/common.sh@367 -- # return 0 00:22:29.261 05:24:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.261 --rc genhtml_branch_coverage=1 00:22:29.261 --rc genhtml_function_coverage=1 00:22:29.261 --rc genhtml_legend=1 00:22:29.261 --rc geninfo_all_blocks=1 00:22:29.261 --rc geninfo_unexecuted_blocks=1 00:22:29.261 00:22:29.261 ' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.261 --rc genhtml_branch_coverage=1 00:22:29.261 --rc genhtml_function_coverage=1 00:22:29.261 --rc genhtml_legend=1 00:22:29.261 --rc geninfo_all_blocks=1 00:22:29.261 --rc geninfo_unexecuted_blocks=1 00:22:29.261 00:22:29.261 ' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.261 --rc 
genhtml_branch_coverage=1 00:22:29.261 --rc genhtml_function_coverage=1 00:22:29.261 --rc genhtml_legend=1 00:22:29.261 --rc geninfo_all_blocks=1 00:22:29.261 --rc geninfo_unexecuted_blocks=1 00:22:29.261 00:22:29.261 ' 00:22:29.261 05:24:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:29.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.261 --rc genhtml_branch_coverage=1 00:22:29.261 --rc genhtml_function_coverage=1 00:22:29.261 --rc genhtml_legend=1 00:22:29.261 --rc geninfo_all_blocks=1 00:22:29.261 --rc geninfo_unexecuted_blocks=1 00:22:29.261 00:22:29.261 ' 00:22:29.261 05:24:45 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.261 05:24:45 -- nvmf/common.sh@7 -- # uname -s 00:22:29.261 05:24:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.261 05:24:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.261 05:24:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.261 05:24:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.261 05:24:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.261 05:24:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.261 05:24:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.261 05:24:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.261 05:24:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.261 05:24:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.261 05:24:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:29.261 05:24:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:29.261 05:24:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.261 05:24:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.261 05:24:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.261 05:24:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:29.262 05:24:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.262 05:24:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.262 05:24:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.262 05:24:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.262 05:24:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.262 05:24:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.262 05:24:45 -- paths/export.sh@5 -- # export PATH 00:22:29.262 05:24:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.262 05:24:45 -- nvmf/common.sh@46 -- # : 0 00:22:29.262 05:24:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:29.262 05:24:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:29.262 05:24:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:29.262 05:24:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.262 05:24:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.262 05:24:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:29.262 05:24:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:29.262 05:24:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:29.262 05:24:45 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.262 05:24:45 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.262 05:24:45 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:29.262 05:24:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:29.262 05:24:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.262 05:24:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:29.262 05:24:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:29.262 05:24:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:29.262 05:24:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.262 05:24:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.262 05:24:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.262 05:24:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:29.262 05:24:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:29.262 05:24:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:29.262 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:22:35.831 05:24:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:35.831 05:24:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:35.831 05:24:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:35.831 05:24:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:35.831 05:24:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:35.831 05:24:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:35.831 05:24:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:35.831 05:24:52 -- nvmf/common.sh@294 -- # net_devs=() 00:22:35.831 05:24:52 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:22:35.831 05:24:52 -- nvmf/common.sh@295 -- # e810=() 00:22:35.831 05:24:52 -- nvmf/common.sh@295 -- # local -ga e810 00:22:35.831 05:24:52 -- nvmf/common.sh@296 -- # x722=() 00:22:35.831 05:24:52 -- nvmf/common.sh@296 -- # local -ga x722 00:22:35.831 05:24:52 -- nvmf/common.sh@297 -- # mlx=() 00:22:35.831 05:24:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:35.831 05:24:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.831 05:24:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:35.831 05:24:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:35.831 05:24:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:35.831 05:24:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:35.831 05:24:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:35.831 05:24:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:35.831 05:24:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:35.831 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:35.831 05:24:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:35.831 05:24:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:35.831 05:24:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:35.831 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:35.831 05:24:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:35.831 05:24:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:35.831 05:24:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:35.831 05:24:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:35.831 05:24:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.831 05:24:52 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:35.831 05:24:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.831 05:24:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:35.831 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:35.831 05:24:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.831 05:24:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:35.831 05:24:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.831 05:24:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:35.831 05:24:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.831 05:24:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:35.831 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:35.832 05:24:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.832 05:24:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:35.832 05:24:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:35.832 05:24:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:35.832 05:24:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:35.832 05:24:52 -- nvmf/common.sh@57 -- # uname 00:22:35.832 05:24:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:35.832 05:24:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:35.832 05:24:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:35.832 05:24:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:35.832 05:24:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:35.832 05:24:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:35.832 05:24:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:35.832 05:24:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:35.832 05:24:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:35.832 05:24:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:35.832 05:24:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:35.832 05:24:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:35.832 05:24:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:35.832 05:24:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:35.832 05:24:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:35.832 05:24:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:35.832 05:24:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:35.832 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.832 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:35.832 05:24:52 -- nvmf/common.sh@104 -- # continue 2 00:22:35.832 05:24:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:35.832 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.832 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.832 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:35.832 05:24:52 -- nvmf/common.sh@104 -- # continue 2 00:22:35.832 05:24:52 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:35.832 05:24:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:35.832 05:24:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:35.832 05:24:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:35.832 05:24:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:35.832 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:35.832 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:35.832 altname enp217s0f0np0 00:22:35.832 altname ens818f0np0 00:22:35.832 inet 192.168.100.8/24 scope global mlx_0_0 00:22:35.832 valid_lft forever preferred_lft forever 00:22:35.832 05:24:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:35.832 05:24:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:35.832 05:24:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:35.832 05:24:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:35.832 05:24:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:35.832 05:24:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:35.832 05:24:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:36.092 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.092 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:36.092 altname enp217s0f1np1 00:22:36.092 altname ens818f1np1 00:22:36.092 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.092 valid_lft forever preferred_lft forever 00:22:36.092 05:24:52 -- nvmf/common.sh@410 -- # return 0 00:22:36.092 05:24:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:36.092 05:24:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.092 05:24:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:36.092 05:24:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:36.092 05:24:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:36.092 05:24:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.092 05:24:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:36.092 05:24:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:36.092 05:24:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.092 05:24:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:36.092 05:24:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.092 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.092 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.092 05:24:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:36.092 05:24:52 -- nvmf/common.sh@104 -- # continue 2 00:22:36.092 05:24:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.092 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.092 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.092 05:24:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.092 05:24:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.092 05:24:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:22:36.093 05:24:52 -- nvmf/common.sh@104 -- # continue 2 00:22:36.093 05:24:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:36.093 05:24:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:36.093 05:24:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.093 05:24:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:36.093 05:24:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:36.093 05:24:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:36.093 05:24:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.093 05:24:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.093 192.168.100.9' 00:22:36.093 05:24:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:36.093 192.168.100.9' 00:22:36.093 05:24:52 -- nvmf/common.sh@445 -- # head -n 1 00:22:36.093 05:24:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.093 05:24:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:36.093 192.168.100.9' 00:22:36.093 05:24:52 -- nvmf/common.sh@446 -- # tail -n +2 00:22:36.093 05:24:52 -- nvmf/common.sh@446 -- # head -n 1 00:22:36.093 05:24:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.093 05:24:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:36.093 05:24:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.093 05:24:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:36.093 05:24:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:36.093 05:24:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:36.093 05:24:52 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:36.093 05:24:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:36.093 05:24:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.093 05:24:52 -- common/autotest_common.sh@10 -- # set +x 00:22:36.093 05:24:52 -- nvmf/common.sh@469 -- # nvmfpid=1880365 00:22:36.093 05:24:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:36.093 05:24:52 -- nvmf/common.sh@470 -- # waitforlisten 1880365 00:22:36.093 05:24:52 -- common/autotest_common.sh@829 -- # '[' -z 1880365 ']' 00:22:36.093 05:24:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.093 05:24:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.093 05:24:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.093 05:24:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.093 05:24:52 -- common/autotest_common.sh@10 -- # set +x 00:22:36.093 [2024-11-19 05:24:52.558733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:36.093 [2024-11-19 05:24:52.558787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.093 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.093 [2024-11-19 05:24:52.630686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.352 [2024-11-19 05:24:52.669438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:36.352 [2024-11-19 05:24:52.669557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.352 [2024-11-19 05:24:52.669569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.352 [2024-11-19 05:24:52.669579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.352 [2024-11-19 05:24:52.669626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.352 [2024-11-19 05:24:52.669725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.352 [2024-11-19 05:24:52.669789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.352 [2024-11-19 05:24:52.669791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.919 05:24:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.919 05:24:53 -- common/autotest_common.sh@862 -- # return 0 00:22:36.919 05:24:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:36.919 05:24:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.919 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:36.919 05:24:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.919 05:24:53 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:36.919 05:24:53 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:36.919 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.919 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:36.919 Malloc0 00:22:36.919 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.919 05:24:53 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:36.919 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.919 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:36.919 Delay0 00:22:36.919 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.919 05:24:53 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:36.919 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.919 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.178 [2024-11-19 05:24:53.491982] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa4080/0x1aaecc0) succeed. 00:22:37.178 [2024-11-19 05:24:53.501300] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa5670/0x1af0360) succeed. 
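Note: at this point the initiator_timeout target holds a 64 MB Malloc0 bdev wrapped in a Delay0 delay bdev (30 us average/p99 read and write latency) on an RDMA transport; the next trace lines export it as nqn.2016-06.io.spdk:cnode1 and connect the initiator. A sketch of the same setup as direct rpc.py calls, with the script path and default RPC socket assumed:

    # Equivalent of the rpc_cmd sequence traced here (script path assumed).
    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420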
00:22:37.178 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.178 05:24:53 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:37.178 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.178 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.178 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.178 05:24:53 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:37.178 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.178 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.178 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.178 05:24:53 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:37.178 05:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.178 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.178 [2024-11-19 05:24:53.643898] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:37.178 05:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.178 05:24:53 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:38.114 05:24:54 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:38.114 05:24:54 -- common/autotest_common.sh@1187 -- # local i=0 00:22:38.114 05:24:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:38.114 05:24:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:38.114 05:24:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:40.643 05:24:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:40.643 05:24:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:40.643 05:24:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:40.643 05:24:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:40.643 05:24:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:40.643 05:24:56 -- common/autotest_common.sh@1197 -- # return 0 00:22:40.643 05:24:56 -- target/initiator_timeout.sh@35 -- # fio_pid=1881082 00:22:40.643 05:24:56 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:40.643 05:24:56 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:40.643 [global] 00:22:40.643 thread=1 00:22:40.643 invalidate=1 00:22:40.643 rw=write 00:22:40.643 time_based=1 00:22:40.643 runtime=60 00:22:40.643 ioengine=libaio 00:22:40.643 direct=1 00:22:40.643 bs=4096 00:22:40.643 iodepth=1 00:22:40.643 norandommap=0 00:22:40.643 numjobs=1 00:22:40.643 00:22:40.643 verify_dump=1 00:22:40.643 verify_backlog=512 00:22:40.643 verify_state_save=0 00:22:40.643 do_verify=1 00:22:40.643 verify=crc32c-intel 00:22:40.643 [job0] 00:22:40.643 filename=/dev/nvme0n1 00:22:40.643 Could not set queue depth (nvme0n1) 00:22:40.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:40.643 fio-3.35 00:22:40.643 Starting 1 thread 00:22:43.171 05:24:59 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:43.171 05:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.171 05:24:59 -- common/autotest_common.sh@10 -- # set +x 00:22:43.171 true 00:22:43.171 05:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.171 05:24:59 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:43.171 05:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.171 05:24:59 -- common/autotest_common.sh@10 -- # set +x 00:22:43.171 true 00:22:43.171 05:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.171 05:24:59 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:43.171 05:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.171 05:24:59 -- common/autotest_common.sh@10 -- # set +x 00:22:43.171 true 00:22:43.171 05:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.171 05:24:59 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:43.171 05:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.171 05:24:59 -- common/autotest_common.sh@10 -- # set +x 00:22:43.171 true 00:22:43.171 05:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.171 05:24:59 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:46.453 05:25:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.453 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:46.453 true 00:22:46.453 05:25:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:46.453 05:25:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.453 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:46.453 true 00:22:46.453 05:25:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:46.453 05:25:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.453 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:46.453 true 00:22:46.453 05:25:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:46.453 05:25:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.453 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:46.453 true 00:22:46.453 05:25:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:46.453 05:25:02 -- target/initiator_timeout.sh@54 -- # wait 1881082 00:23:42.760 00:23:42.760 job0: (groupid=0, jobs=1): err= 0: pid=1881338: Tue Nov 19 05:25:57 2024 00:23:42.760 read: IOPS=1237, BW=4949KiB/s (5068kB/s)(290MiB/60000msec) 00:23:42.760 slat (usec): min=2, max=959, avg= 9.24, stdev= 3.79 00:23:42.760 clat (usec): min=38, max=300, avg=104.80, stdev= 7.05 00:23:42.760 lat (usec): min=82, max=997, avg=114.04, stdev= 8.17 00:23:42.760 clat percentiles (usec): 00:23:42.760 | 1.00th=[ 90], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:23:42.760 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 
00:23:42.760 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 117], 00:23:42.760 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 135], 00:23:42.760 | 99.99th=[ 198] 00:23:42.760 write: IOPS=1243, BW=4974KiB/s (5094kB/s)(291MiB/60000msec); 0 zone resets 00:23:42.760 slat (usec): min=3, max=11645, avg=12.17, stdev=55.94 00:23:42.760 clat (usec): min=38, max=42646k, avg=673.44, stdev=156125.71 00:23:42.760 lat (usec): min=80, max=42646k, avg=685.61, stdev=156125.69 00:23:42.760 clat percentiles (usec): 00:23:42.760 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:23:42.760 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:23:42.760 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:23:42.760 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 139], 00:23:42.760 | 99.99th=[ 273] 00:23:42.760 bw ( KiB/s): min= 3016, max=20480, per=100.00%, avg=16616.89, stdev=2903.28, samples=35 00:23:42.760 iops : min= 754, max= 5120, avg=4154.20, stdev=725.79, samples=35 00:23:42.760 lat (usec) : 50=0.01%, 100=31.62%, 250=68.37%, 500=0.01% 00:23:42.760 lat (msec) : >=2000=0.01% 00:23:42.760 cpu : usr=1.94%, sys=3.17%, ctx=148862, majf=0, minf=142 00:23:42.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.760 issued rwts: total=74240,74613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.760 00:23:42.760 Run status group 0 (all jobs): 00:23:42.760 READ: bw=4949KiB/s (5068kB/s), 4949KiB/s-4949KiB/s (5068kB/s-5068kB/s), io=290MiB (304MB), run=60000-60000msec 00:23:42.760 WRITE: bw=4974KiB/s (5094kB/s), 4974KiB/s-4974KiB/s (5094kB/s-5094kB/s), io=291MiB (306MB), run=60000-60000msec 00:23:42.760 00:23:42.760 Disk stats (read/write): 00:23:42.760 nvme0n1: ios=74046/74240, merge=0/0, ticks=7066/6917, in_queue=13983, util=99.77% 00:23:42.760 05:25:57 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:42.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:42.760 05:25:58 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:42.760 05:25:58 -- common/autotest_common.sh@1208 -- # local i=0 00:23:42.761 05:25:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:42.761 05:25:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:42.761 05:25:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:42.761 05:25:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:42.761 05:25:58 -- common/autotest_common.sh@1220 -- # return 0 00:23:42.761 05:25:58 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:42.761 05:25:58 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:42.761 nvmf hotplug test: fio successful as expected 00:23:42.761 05:25:58 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.761 05:25:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.761 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.761 05:25:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.761 05:25:58 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
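Note: the fio run above is the actual timeout check: with the 60 s verify workload already in flight through Delay0, the test raised the delay bdev's latencies to 31000000 us (p99_write to 310000000 us, exactly as traced), slept, then dropped all four metrics back to 30 us; fio still had to finish with err=0, which the "fio successful as expected" line confirms. Condensed as rpc.py calls (script path assumed; values are the delay bdev's microsecond units, copied from the trace):

    RPC=./scripts/rpc.py
    for m in avg_read avg_write p99_read; do
        $RPC bdev_delay_update_latency Delay0 "$m" 31000000    # ~31 s per I/O
    done
    $RPC bdev_delay_update_latency Delay0 p99_write 310000000  # value as traced
    sleep 3                                                    # let delayed I/O accumulate
    for m in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 "$m" 30          # back to 30 us
    done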
00:23:42.761 05:25:58 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:42.761 05:25:58 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:42.761 05:25:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:42.761 05:25:58 -- nvmf/common.sh@116 -- # sync 00:23:42.761 05:25:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:42.761 05:25:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:42.761 05:25:58 -- nvmf/common.sh@119 -- # set +e 00:23:42.761 05:25:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:42.761 05:25:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:42.761 rmmod nvme_rdma 00:23:42.761 rmmod nvme_fabrics 00:23:42.761 05:25:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:42.761 05:25:58 -- nvmf/common.sh@123 -- # set -e 00:23:42.761 05:25:58 -- nvmf/common.sh@124 -- # return 0 00:23:42.761 05:25:58 -- nvmf/common.sh@477 -- # '[' -n 1880365 ']' 00:23:42.761 05:25:58 -- nvmf/common.sh@478 -- # killprocess 1880365 00:23:42.761 05:25:58 -- common/autotest_common.sh@936 -- # '[' -z 1880365 ']' 00:23:42.761 05:25:58 -- common/autotest_common.sh@940 -- # kill -0 1880365 00:23:42.761 05:25:58 -- common/autotest_common.sh@941 -- # uname 00:23:42.761 05:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:42.761 05:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1880365 00:23:42.761 05:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:42.761 05:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:42.761 05:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1880365' 00:23:42.761 killing process with pid 1880365 00:23:42.761 05:25:58 -- common/autotest_common.sh@955 -- # kill 1880365 00:23:42.761 05:25:58 -- common/autotest_common.sh@960 -- # wait 1880365 00:23:42.761 05:25:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:42.761 05:25:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:42.761 00:23:42.761 real 1m12.950s 00:23:42.761 user 4m34.353s 00:23:42.761 sys 0m7.852s 00:23:42.761 05:25:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:42.761 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.761 ************************************ 00:23:42.761 END TEST nvmf_initiator_timeout 00:23:42.761 ************************************ 00:23:42.761 05:25:58 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:42.761 05:25:58 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:42.761 05:25:58 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:42.761 05:25:58 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:42.761 05:25:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:42.761 05:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.761 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.761 ************************************ 00:23:42.761 START TEST nvmf_shutdown 00:23:42.761 ************************************ 00:23:42.761 05:25:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:42.761 * Looking for test storage... 
00:23:42.761 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:42.761 05:25:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:42.761 05:25:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:42.761 05:25:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:42.761 05:25:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:42.761 05:25:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:42.761 05:25:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:42.761 05:25:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:42.761 05:25:58 -- scripts/common.sh@335 -- # IFS=.-: 00:23:42.761 05:25:58 -- scripts/common.sh@335 -- # read -ra ver1 00:23:42.761 05:25:58 -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.761 05:25:58 -- scripts/common.sh@336 -- # read -ra ver2 00:23:42.761 05:25:58 -- scripts/common.sh@337 -- # local 'op=<' 00:23:42.761 05:25:58 -- scripts/common.sh@339 -- # ver1_l=2 00:23:42.761 05:25:58 -- scripts/common.sh@340 -- # ver2_l=1 00:23:42.761 05:25:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:42.761 05:25:58 -- scripts/common.sh@343 -- # case "$op" in 00:23:42.761 05:25:58 -- scripts/common.sh@344 -- # : 1 00:23:42.761 05:25:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:42.761 05:25:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.761 05:25:58 -- scripts/common.sh@364 -- # decimal 1 00:23:42.761 05:25:58 -- scripts/common.sh@352 -- # local d=1 00:23:42.761 05:25:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.761 05:25:58 -- scripts/common.sh@354 -- # echo 1 00:23:42.761 05:25:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:42.761 05:25:58 -- scripts/common.sh@365 -- # decimal 2 00:23:42.761 05:25:58 -- scripts/common.sh@352 -- # local d=2 00:23:42.761 05:25:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.761 05:25:58 -- scripts/common.sh@354 -- # echo 2 00:23:42.761 05:25:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:42.761 05:25:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:42.761 05:25:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:42.761 05:25:58 -- scripts/common.sh@367 -- # return 0 00:23:42.761 05:25:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.761 05:25:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:42.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.761 --rc genhtml_branch_coverage=1 00:23:42.761 --rc genhtml_function_coverage=1 00:23:42.761 --rc genhtml_legend=1 00:23:42.761 --rc geninfo_all_blocks=1 00:23:42.761 --rc geninfo_unexecuted_blocks=1 00:23:42.761 00:23:42.761 ' 00:23:42.761 05:25:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:42.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.761 --rc genhtml_branch_coverage=1 00:23:42.761 --rc genhtml_function_coverage=1 00:23:42.761 --rc genhtml_legend=1 00:23:42.761 --rc geninfo_all_blocks=1 00:23:42.761 --rc geninfo_unexecuted_blocks=1 00:23:42.761 00:23:42.761 ' 00:23:42.761 05:25:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:42.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.761 --rc genhtml_branch_coverage=1 00:23:42.761 --rc genhtml_function_coverage=1 00:23:42.761 --rc genhtml_legend=1 00:23:42.761 --rc geninfo_all_blocks=1 00:23:42.761 --rc geninfo_unexecuted_blocks=1 00:23:42.761 00:23:42.761 ' 
00:23:42.761 05:25:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:42.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.761 --rc genhtml_branch_coverage=1 00:23:42.761 --rc genhtml_function_coverage=1 00:23:42.761 --rc genhtml_legend=1 00:23:42.761 --rc geninfo_all_blocks=1 00:23:42.761 --rc geninfo_unexecuted_blocks=1 00:23:42.761 00:23:42.761 ' 00:23:42.761 05:25:58 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.761 05:25:58 -- nvmf/common.sh@7 -- # uname -s 00:23:42.761 05:25:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.761 05:25:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.761 05:25:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.761 05:25:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.761 05:25:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.761 05:25:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.761 05:25:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.761 05:25:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.761 05:25:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.761 05:25:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.761 05:25:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:42.761 05:25:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:42.761 05:25:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.761 05:25:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.761 05:25:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.761 05:25:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:42.761 05:25:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.761 05:25:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.761 05:25:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.761 05:25:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.762 05:25:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.762 05:25:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.762 05:25:58 -- paths/export.sh@5 -- # export PATH 00:23:42.762 05:25:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.762 05:25:58 -- nvmf/common.sh@46 -- # : 0 00:23:42.762 05:25:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:42.762 05:25:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:42.762 05:25:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:42.762 05:25:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.762 05:25:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.762 05:25:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:42.762 05:25:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:42.762 05:25:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:42.762 05:25:58 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.762 05:25:58 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.762 05:25:58 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:42.762 05:25:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:42.762 05:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.762 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.762 ************************************ 00:23:42.762 START TEST nvmf_shutdown_tc1 00:23:42.762 ************************************ 00:23:42.762 05:25:58 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:42.762 05:25:58 -- target/shutdown.sh@74 -- # starttarget 00:23:42.762 05:25:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:42.762 05:25:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:42.762 05:25:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.762 05:25:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:42.762 05:25:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:42.762 05:25:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:42.762 05:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.762 05:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.762 05:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.762 05:25:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:42.762 05:25:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:42.762 05:25:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:42.762 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:23:49.326 05:26:05 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:49.326 05:26:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:49.326 05:26:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:49.326 05:26:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:49.326 05:26:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:49.326 05:26:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:49.326 05:26:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:49.326 05:26:05 -- nvmf/common.sh@294 -- # net_devs=() 00:23:49.326 05:26:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:49.326 05:26:05 -- nvmf/common.sh@295 -- # e810=() 00:23:49.326 05:26:05 -- nvmf/common.sh@295 -- # local -ga e810 00:23:49.326 05:26:05 -- nvmf/common.sh@296 -- # x722=() 00:23:49.326 05:26:05 -- nvmf/common.sh@296 -- # local -ga x722 00:23:49.326 05:26:05 -- nvmf/common.sh@297 -- # mlx=() 00:23:49.326 05:26:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:49.326 05:26:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.326 05:26:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.326 05:26:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.326 05:26:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.326 05:26:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.327 05:26:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:49.327 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:49.327 05:26:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:49.327 05:26:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:49.327 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:49.327 05:26:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:49.327 05:26:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.327 05:26:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.327 05:26:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:49.327 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:49.327 05:26:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.327 05:26:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.327 05:26:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:49.327 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:49.327 05:26:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.327 05:26:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:49.327 05:26:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:49.327 05:26:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:49.327 05:26:05 -- nvmf/common.sh@57 -- # uname 00:23:49.327 05:26:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:49.327 05:26:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:49.327 05:26:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:49.327 05:26:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:49.327 05:26:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:49.327 05:26:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:49.327 05:26:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:49.327 05:26:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:49.327 05:26:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:49.327 05:26:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:49.327 05:26:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:49.327 05:26:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:49.327 05:26:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:49.327 05:26:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:49.327 05:26:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:49.327 05:26:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:49.327 05:26:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:49.327 05:26:05 -- nvmf/common.sh@104 -- # continue 2 
00:23:49.327 05:26:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.327 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:49.327 05:26:05 -- nvmf/common.sh@104 -- # continue 2 00:23:49.327 05:26:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:49.327 05:26:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:49.327 05:26:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:49.327 05:26:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:49.327 05:26:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:49.327 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:49.327 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:49.327 altname enp217s0f0np0 00:23:49.327 altname ens818f0np0 00:23:49.327 inet 192.168.100.8/24 scope global mlx_0_0 00:23:49.327 valid_lft forever preferred_lft forever 00:23:49.327 05:26:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:49.327 05:26:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:49.327 05:26:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:49.327 05:26:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:49.327 05:26:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:49.327 05:26:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:49.327 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:49.327 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:49.327 altname enp217s0f1np1 00:23:49.327 altname ens818f1np1 00:23:49.327 inet 192.168.100.9/24 scope global mlx_0_1 00:23:49.327 valid_lft forever preferred_lft forever 00:23:49.327 05:26:05 -- nvmf/common.sh@410 -- # return 0 00:23:49.327 05:26:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:49.327 05:26:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:49.327 05:26:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:49.327 05:26:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:49.327 05:26:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:49.327 05:26:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:49.327 05:26:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:49.327 05:26:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:49.328 05:26:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:49.328 05:26:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:49.328 05:26:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:49.328 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.328 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:49.328 05:26:05 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:49.328 05:26:05 -- nvmf/common.sh@104 -- # continue 2 00:23:49.328 05:26:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:49.328 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.328 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:49.328 05:26:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.328 05:26:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:49.328 05:26:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:49.328 05:26:05 -- nvmf/common.sh@104 -- # continue 2 00:23:49.328 05:26:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:49.328 05:26:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:49.328 05:26:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:49.328 05:26:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:49.328 05:26:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:49.328 05:26:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:49.586 05:26:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:49.586 05:26:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:49.586 05:26:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:49.586 05:26:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:49.586 05:26:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:49.586 05:26:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:49.586 05:26:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:49.586 192.168.100.9' 00:23:49.586 05:26:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:49.586 192.168.100.9' 00:23:49.586 05:26:05 -- nvmf/common.sh@445 -- # head -n 1 00:23:49.586 05:26:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:49.586 05:26:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:49.586 192.168.100.9' 00:23:49.586 05:26:05 -- nvmf/common.sh@446 -- # tail -n +2 00:23:49.586 05:26:05 -- nvmf/common.sh@446 -- # head -n 1 00:23:49.586 05:26:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:49.586 05:26:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:49.586 05:26:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:49.586 05:26:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:49.586 05:26:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:49.586 05:26:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:49.586 05:26:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:49.586 05:26:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:49.586 05:26:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.586 05:26:05 -- common/autotest_common.sh@10 -- # set +x 00:23:49.586 05:26:05 -- nvmf/common.sh@469 -- # nvmfpid=1894924 00:23:49.586 05:26:05 -- nvmf/common.sh@470 -- # waitforlisten 1894924 00:23:49.586 05:26:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:49.586 05:26:05 -- common/autotest_common.sh@829 -- # '[' -z 1894924 ']' 00:23:49.586 05:26:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.586 05:26:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.586 05:26:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:49.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.586 05:26:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.586 05:26:05 -- common/autotest_common.sh@10 -- # set +x 00:23:49.586 [2024-11-19 05:26:05.999876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:49.586 [2024-11-19 05:26:05.999928] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.586 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.586 [2024-11-19 05:26:06.071824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.586 [2024-11-19 05:26:06.110083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:49.586 [2024-11-19 05:26:06.110194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.586 [2024-11-19 05:26:06.110204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.586 [2024-11-19 05:26:06.110213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.586 [2024-11-19 05:26:06.110249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.586 [2024-11-19 05:26:06.110334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.586 [2024-11-19 05:26:06.110445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.586 [2024-11-19 05:26:06.110446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:50.521 05:26:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.521 05:26:06 -- common/autotest_common.sh@862 -- # return 0 00:23:50.521 05:26:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:50.521 05:26:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.521 05:26:06 -- common/autotest_common.sh@10 -- # set +x 00:23:50.521 05:26:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.521 05:26:06 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:50.521 05:26:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.521 05:26:06 -- common/autotest_common.sh@10 -- # set +x 00:23:50.521 [2024-11-19 05:26:06.890377] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb764f0/0xb7a9e0) succeed. 00:23:50.521 [2024-11-19 05:26:06.899639] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb77ae0/0xbbc080) succeed. 
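
With the RDMA transport created (the `rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192` call traced above) the target begins exposing subsystems; the Malloc1..Malloc10 bdevs that appear next are driven by rpcs.txt. A hedged sketch of the equivalent rpc.py sequence for one target: the transport flags are copied from the trace, while the Malloc sizing (128 MiB, 512 B blocks) and serial numbers are illustrative assumptions, since rpcs.txt itself is not echoed in the log:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 10); do
      $rpc bdev_malloc_create -b "Malloc$i" 128 512
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done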
00:23:50.521 05:26:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.521 05:26:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:50.521 05:26:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:50.521 05:26:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.521 05:26:07 -- common/autotest_common.sh@10 -- # set +x 00:23:50.521 05:26:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.521 05:26:07 -- target/shutdown.sh@28 -- # cat 00:23:50.521 05:26:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:50.521 05:26:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.521 05:26:07 -- common/autotest_common.sh@10 -- # set +x 00:23:50.779 Malloc1 00:23:50.779 [2024-11-19 05:26:07.125869] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:50.779 Malloc2 00:23:50.779 Malloc3 00:23:50.779 Malloc4 00:23:50.779 Malloc5 00:23:50.779 Malloc6 00:23:51.037 Malloc7 00:23:51.037 Malloc8 00:23:51.037 Malloc9 00:23:51.037 Malloc10 00:23:51.037 05:26:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.037 05:26:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:51.037 05:26:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.037 05:26:07 -- common/autotest_common.sh@10 -- # set +x 00:23:51.037 05:26:07 -- target/shutdown.sh@78 -- # perfpid=1895257 00:23:51.037 05:26:07 -- target/shutdown.sh@79 -- # waitforlisten 1895257 /var/tmp/bdevperf.sock 00:23:51.037 05:26:07 -- common/autotest_common.sh@829 -- # '[' -z 1895257 ']' 00:23:51.037 05:26:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.037 05:26:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.037 05:26:07 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:51.037 05:26:07 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.037 05:26:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.037 05:26:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.037 05:26:07 -- nvmf/common.sh@520 -- # config=() 00:23:51.037 05:26:07 -- common/autotest_common.sh@10 -- # set +x 00:23:51.037 05:26:07 -- nvmf/common.sh@520 -- # local subsystem config 00:23:51.037 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.037 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.037 { 00:23:51.037 "params": { 00:23:51.037 "name": "Nvme$subsystem", 00:23:51.037 "trtype": "$TEST_TRANSPORT", 00:23:51.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.038 "adrfam": "ipv4", 00:23:51.038 "trsvcid": "$NVMF_PORT", 00:23:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.038 "hdgst": ${hdgst:-false}, 00:23:51.038 "ddgst": ${ddgst:-false} 00:23:51.038 }, 00:23:51.038 "method": "bdev_nvme_attach_controller" 00:23:51.038 } 00:23:51.038 EOF 00:23:51.038 )") 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.038 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.038 { 00:23:51.038 "params": { 00:23:51.038 "name": "Nvme$subsystem", 00:23:51.038 "trtype": "$TEST_TRANSPORT", 00:23:51.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.038 "adrfam": "ipv4", 00:23:51.038 "trsvcid": "$NVMF_PORT", 00:23:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.038 "hdgst": ${hdgst:-false}, 00:23:51.038 "ddgst": ${ddgst:-false} 00:23:51.038 }, 00:23:51.038 "method": "bdev_nvme_attach_controller" 00:23:51.038 } 00:23:51.038 EOF 00:23:51.038 )") 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.038 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.038 { 00:23:51.038 "params": { 00:23:51.038 "name": "Nvme$subsystem", 00:23:51.038 "trtype": "$TEST_TRANSPORT", 00:23:51.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.038 "adrfam": "ipv4", 00:23:51.038 "trsvcid": "$NVMF_PORT", 00:23:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.038 "hdgst": ${hdgst:-false}, 00:23:51.038 "ddgst": ${ddgst:-false} 00:23:51.038 }, 00:23:51.038 "method": "bdev_nvme_attach_controller" 00:23:51.038 } 00:23:51.038 EOF 00:23:51.038 )") 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.038 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.038 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.038 { 00:23:51.038 "params": { 00:23:51.038 "name": "Nvme$subsystem", 00:23:51.038 "trtype": "$TEST_TRANSPORT", 00:23:51.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.038 "adrfam": "ipv4", 00:23:51.038 "trsvcid": "$NVMF_PORT", 00:23:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.038 "hdgst": ${hdgst:-false}, 00:23:51.038 "ddgst": ${ddgst:-false} 00:23:51.038 }, 00:23:51.038 "method": "bdev_nvme_attach_controller" 00:23:51.038 } 00:23:51.038 EOF 00:23:51.038 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 [2024-11-19 05:26:07.613188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:51.296 [2024-11-19 05:26:07.613243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": 
"Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.296 { 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme$subsystem", 00:23:51.296 "trtype": "$TEST_TRANSPORT", 00:23:51.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "$NVMF_PORT", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.296 "hdgst": ${hdgst:-false}, 00:23:51.296 "ddgst": ${ddgst:-false} 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 } 00:23:51.296 EOF 00:23:51.296 )") 00:23:51.296 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.296 05:26:07 -- nvmf/common.sh@542 -- # cat 00:23:51.296 05:26:07 -- nvmf/common.sh@544 -- # jq . 00:23:51.296 05:26:07 -- nvmf/common.sh@545 -- # IFS=, 00:23:51.296 05:26:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme1", 00:23:51.296 "trtype": "rdma", 00:23:51.296 "traddr": "192.168.100.8", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "4420", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.296 "hdgst": false, 00:23:51.296 "ddgst": false 00:23:51.296 }, 00:23:51.296 "method": "bdev_nvme_attach_controller" 00:23:51.296 },{ 00:23:51.296 "params": { 00:23:51.296 "name": "Nvme2", 00:23:51.296 "trtype": "rdma", 00:23:51.296 "traddr": "192.168.100.8", 00:23:51.296 "adrfam": "ipv4", 00:23:51.296 "trsvcid": "4420", 00:23:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme3", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme4", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme5", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:51.297 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme6", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme7", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme8", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme9", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 },{ 00:23:51.297 "params": { 00:23:51.297 "name": "Nvme10", 00:23:51.297 "trtype": "rdma", 00:23:51.297 "traddr": "192.168.100.8", 00:23:51.297 "adrfam": "ipv4", 00:23:51.297 "trsvcid": "4420", 00:23:51.297 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:51.297 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:51.297 "hdgst": false, 00:23:51.297 "ddgst": false 00:23:51.297 }, 00:23:51.297 "method": "bdev_nvme_attach_controller" 00:23:51.297 }' 00:23:51.297 [2024-11-19 05:26:07.687394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.297 [2024-11-19 05:26:07.723763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.670 05:26:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.670 05:26:09 -- common/autotest_common.sh@862 -- # return 0 00:23:52.670 05:26:09 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:52.670 05:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.670 05:26:09 -- common/autotest_common.sh@10 -- # set +x 00:23:52.670 05:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.670 05:26:09 -- target/shutdown.sh@83 -- # kill -9 1895257 00:23:52.670 05:26:09 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:52.670 05:26:09 -- target/shutdown.sh@87 -- # sleep 1 00:23:53.602 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1895257 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:53.602 05:26:10 -- target/shutdown.sh@88 -- # kill -0 
1894924 00:23:53.602 05:26:10 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:53.602 05:26:10 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:53.602 05:26:10 -- nvmf/common.sh@520 -- # config=() 00:23:53.602 05:26:10 -- nvmf/common.sh@520 -- # local subsystem config 00:23:53.602 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.602 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.602 { 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme$subsystem", 00:23:53.602 "trtype": "$TEST_TRANSPORT", 00:23:53.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "$NVMF_PORT", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.602 "hdgst": ${hdgst:-false}, 00:23:53.602 "ddgst": ${ddgst:-false} 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 } 00:23:53.602 EOF 00:23:53.602 )") 00:23:53.602 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.602 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.602 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.602 { 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme$subsystem", 00:23:53.602 "trtype": "$TEST_TRANSPORT", 00:23:53.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "$NVMF_PORT", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.602 "hdgst": ${hdgst:-false}, 00:23:53.602 "ddgst": ${ddgst:-false} 00:23:53.602 }, 00:23:53.603 "method": "bdev_nvme_attach_controller" 00:23:53.603 } 00:23:53.603 EOF 00:23:53.603 )") 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.603 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.603 { 00:23:53.603 "params": { 00:23:53.603 "name": "Nvme$subsystem", 00:23:53.603 "trtype": "$TEST_TRANSPORT", 00:23:53.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.603 "adrfam": "ipv4", 00:23:53.603 "trsvcid": "$NVMF_PORT", 00:23:53.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.603 "hdgst": ${hdgst:-false}, 00:23:53.603 "ddgst": ${ddgst:-false} 00:23:53.603 }, 00:23:53.603 "method": "bdev_nvme_attach_controller" 00:23:53.603 } 00:23:53.603 EOF 00:23:53.603 )") 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.603 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.603 { 00:23:53.603 "params": { 00:23:53.603 "name": "Nvme$subsystem", 00:23:53.603 "trtype": "$TEST_TRANSPORT", 00:23:53.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.603 "adrfam": "ipv4", 00:23:53.603 "trsvcid": "$NVMF_PORT", 00:23:53.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.603 "hdgst": ${hdgst:-false}, 00:23:53.603 "ddgst": ${ddgst:-false} 00:23:53.603 }, 00:23:53.603 "method": "bdev_nvme_attach_controller" 00:23:53.603 } 00:23:53.603 EOF 00:23:53.603 )") 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.603 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.603 
05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.603 { 00:23:53.603 "params": { 00:23:53.603 "name": "Nvme$subsystem", 00:23:53.603 "trtype": "$TEST_TRANSPORT", 00:23:53.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.603 "adrfam": "ipv4", 00:23:53.603 "trsvcid": "$NVMF_PORT", 00:23:53.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.603 "hdgst": ${hdgst:-false}, 00:23:53.603 "ddgst": ${ddgst:-false} 00:23:53.603 }, 00:23:53.603 "method": "bdev_nvme_attach_controller" 00:23:53.603 } 00:23:53.603 EOF 00:23:53.603 )") 00:23:53.603 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.860 [2024-11-19 05:26:10.168305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:53.860 [2024-11-19 05:26:10.168361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895754 ] 00:23:53.860 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.860 { 00:23:53.860 "params": { 00:23:53.860 "name": "Nvme$subsystem", 00:23:53.860 "trtype": "$TEST_TRANSPORT", 00:23:53.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.860 "adrfam": "ipv4", 00:23:53.860 "trsvcid": "$NVMF_PORT", 00:23:53.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.860 "hdgst": ${hdgst:-false}, 00:23:53.860 "ddgst": ${ddgst:-false} 00:23:53.860 }, 00:23:53.860 "method": "bdev_nvme_attach_controller" 00:23:53.860 } 00:23:53.860 EOF 00:23:53.860 )") 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.860 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.860 { 00:23:53.860 "params": { 00:23:53.860 "name": "Nvme$subsystem", 00:23:53.860 "trtype": "$TEST_TRANSPORT", 00:23:53.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.860 "adrfam": "ipv4", 00:23:53.860 "trsvcid": "$NVMF_PORT", 00:23:53.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.860 "hdgst": ${hdgst:-false}, 00:23:53.860 "ddgst": ${ddgst:-false} 00:23:53.860 }, 00:23:53.860 "method": "bdev_nvme_attach_controller" 00:23:53.860 } 00:23:53.860 EOF 00:23:53.860 )") 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.860 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.860 { 00:23:53.860 "params": { 00:23:53.860 "name": "Nvme$subsystem", 00:23:53.860 "trtype": "$TEST_TRANSPORT", 00:23:53.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.860 "adrfam": "ipv4", 00:23:53.860 "trsvcid": "$NVMF_PORT", 00:23:53.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.860 "hdgst": ${hdgst:-false}, 00:23:53.860 "ddgst": ${ddgst:-false} 00:23:53.860 }, 00:23:53.860 "method": "bdev_nvme_attach_controller" 00:23:53.860 } 00:23:53.860 EOF 00:23:53.860 )") 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.860 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.860 { 00:23:53.860 "params": { 00:23:53.860 "name": 
"Nvme$subsystem", 00:23:53.860 "trtype": "$TEST_TRANSPORT", 00:23:53.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.860 "adrfam": "ipv4", 00:23:53.860 "trsvcid": "$NVMF_PORT", 00:23:53.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.860 "hdgst": ${hdgst:-false}, 00:23:53.860 "ddgst": ${ddgst:-false} 00:23:53.860 }, 00:23:53.860 "method": "bdev_nvme_attach_controller" 00:23:53.860 } 00:23:53.860 EOF 00:23:53.860 )") 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.860 05:26:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.860 05:26:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.860 { 00:23:53.860 "params": { 00:23:53.860 "name": "Nvme$subsystem", 00:23:53.860 "trtype": "$TEST_TRANSPORT", 00:23:53.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "$NVMF_PORT", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.861 "hdgst": ${hdgst:-false}, 00:23:53.861 "ddgst": ${ddgst:-false} 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 } 00:23:53.861 EOF 00:23:53.861 )") 00:23:53.861 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.861 05:26:10 -- nvmf/common.sh@542 -- # cat 00:23:53.861 05:26:10 -- nvmf/common.sh@544 -- # jq . 00:23:53.861 05:26:10 -- nvmf/common.sh@545 -- # IFS=, 00:23:53.861 05:26:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme1", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme2", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme3", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme4", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme5", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.861 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme6", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme7", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme8", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme9", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 },{ 00:23:53.861 "params": { 00:23:53.861 "name": "Nvme10", 00:23:53.861 "trtype": "rdma", 00:23:53.861 "traddr": "192.168.100.8", 00:23:53.861 "adrfam": "ipv4", 00:23:53.861 "trsvcid": "4420", 00:23:53.861 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.861 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.861 "hdgst": false, 00:23:53.861 "ddgst": false 00:23:53.861 }, 00:23:53.861 "method": "bdev_nvme_attach_controller" 00:23:53.861 }' 00:23:53.861 [2024-11-19 05:26:10.242461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.861 [2024-11-19 05:26:10.279149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.792 Running I/O for 1 seconds... 
00:23:55.726 
00:23:55.726 Latency(us) 
[2024-11-19T04:26:12.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
[2024-11-19T04:26:12.284Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.726 Nvme1n1 : 1.10 747.45 46.72 0.00 0.00 84699.68 7392.46 78014.05 
[2024-11-19T04:26:12.284Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.726 Nvme2n1 : 1.10 746.78 46.67 0.00 0.00 84166.94 7602.18 74658.61 
[2024-11-19T04:26:12.284Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.726 Nvme3n1 : 1.11 746.11 46.63 0.00 0.00 83739.69 7811.89 72980.89 
[2024-11-19T04:26:12.284Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.726 Nvme4n1 : 1.11 745.44 46.59 0.00 0.00 83304.69 8021.61 71722.60 
[2024-11-19T04:26:12.284Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.726 Nvme5n1 : 1.11 705.98 44.12 0.00 0.00 87409.44 8178.89 109890.76 
[2024-11-19T04:26:12.284Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.726 Verification LBA range: start 0x0 length 0x400 
00:23:55.727 Nvme6n1 : 1.11 744.14 46.51 0.00 0.00 82472.89 8388.61 70044.88 
[2024-11-19T04:26:12.285Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.727 Verification LBA range: start 0x0 length 0x400 
00:23:55.727 Nvme7n1 : 1.11 743.47 46.47 0.00 0.00 82038.33 8545.89 71722.60 
[2024-11-19T04:26:12.285Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.727 Verification LBA range: start 0x0 length 0x400 
00:23:55.727 Nvme8n1 : 1.11 742.79 46.42 0.00 0.00 81620.11 8808.04 73400.32 
[2024-11-19T04:26:12.285Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.727 Verification LBA range: start 0x0 length 0x400 
00:23:55.727 Nvme9n1 : 1.11 660.35 41.27 0.00 0.00 91163.68 8912.90 161900.13 
[2024-11-19T04:26:12.285Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:55.727 Verification LBA range: start 0x0 length 0x400 
00:23:55.727 Nvme10n1 : 1.11 659.84 41.24 0.00 0.00 90609.90 7654.60 159383.55 
[2024-11-19T04:26:12.285Z] =================================================================================================================== 
00:23:55.727 [2024-11-19T04:26:12.285Z] Total : 7242.35 452.65 0.00 0.00 84979.55 7392.46 161900.13 
00:23:55.985 05:26:12 -- target/shutdown.sh@93 -- # stoptarget 
00:23:55.985 05:26:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 
00:23:55.985 05:26:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:56.243 05:26:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:23:56.243 05:26:12 -- target/shutdown.sh@45 -- # nvmftestfini 
00:23:56.243 05:26:12 -- nvmf/common.sh@476 -- # nvmfcleanup 
00:23:56.243 05:26:12 -- nvmf/common.sh@116 -- # sync 
00:23:56.243 05:26:12 -- 
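
A quick sanity check on the Total row above: at the 64 KiB I/O size this run uses (bdevperf -o 65536), the IOPS and MiB/s columns are locked together, MiB/s = IOPS * 65536 / 1048576 = IOPS / 16:

    awk 'BEGIN { printf "%.2f\n", 7242.35 / 16 }'   # prints 452.65, matching the table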
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:56.243 05:26:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:56.243 05:26:12 -- nvmf/common.sh@119 -- # set +e 00:23:56.243 05:26:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:56.243 05:26:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:56.243 rmmod nvme_rdma 00:23:56.243 rmmod nvme_fabrics 00:23:56.243 05:26:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:56.243 05:26:12 -- nvmf/common.sh@123 -- # set -e 00:23:56.243 05:26:12 -- nvmf/common.sh@124 -- # return 0 00:23:56.243 05:26:12 -- nvmf/common.sh@477 -- # '[' -n 1894924 ']' 00:23:56.243 05:26:12 -- nvmf/common.sh@478 -- # killprocess 1894924 00:23:56.243 05:26:12 -- common/autotest_common.sh@936 -- # '[' -z 1894924 ']' 00:23:56.243 05:26:12 -- common/autotest_common.sh@940 -- # kill -0 1894924 00:23:56.243 05:26:12 -- common/autotest_common.sh@941 -- # uname 00:23:56.243 05:26:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.243 05:26:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1894924 00:23:56.243 05:26:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:56.243 05:26:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:56.243 05:26:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1894924' 00:23:56.243 killing process with pid 1894924 00:23:56.243 05:26:12 -- common/autotest_common.sh@955 -- # kill 1894924 00:23:56.243 05:26:12 -- common/autotest_common.sh@960 -- # wait 1894924 00:23:56.810 05:26:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:56.810 05:26:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:56.810 00:23:56.810 real 0m14.290s 00:23:56.810 user 0m33.384s 00:23:56.810 sys 0m6.543s 00:23:56.810 05:26:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:56.810 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:56.810 ************************************ 00:23:56.810 END TEST nvmf_shutdown_tc1 00:23:56.810 ************************************ 00:23:56.810 05:26:13 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:56.810 05:26:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:56.810 05:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:56.810 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:56.810 ************************************ 00:23:56.810 START TEST nvmf_shutdown_tc2 00:23:56.810 ************************************ 00:23:56.810 05:26:13 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:23:56.810 05:26:13 -- target/shutdown.sh@98 -- # starttarget 00:23:56.810 05:26:13 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:56.810 05:26:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:56.810 05:26:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.810 05:26:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:56.810 05:26:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:56.810 05:26:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:56.810 05:26:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.810 05:26:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.810 05:26:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.810 05:26:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:56.810 05:26:13 -- nvmf/common.sh@284 -- # xtrace_disable 
00:23:56.810 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:56.810 05:26:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:56.810 05:26:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:56.810 05:26:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:56.810 05:26:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:56.810 05:26:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:56.810 05:26:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:56.810 05:26:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:56.810 05:26:13 -- nvmf/common.sh@294 -- # net_devs=() 00:23:56.810 05:26:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:56.810 05:26:13 -- nvmf/common.sh@295 -- # e810=() 00:23:56.810 05:26:13 -- nvmf/common.sh@295 -- # local -ga e810 00:23:56.810 05:26:13 -- nvmf/common.sh@296 -- # x722=() 00:23:56.810 05:26:13 -- nvmf/common.sh@296 -- # local -ga x722 00:23:56.810 05:26:13 -- nvmf/common.sh@297 -- # mlx=() 00:23:56.810 05:26:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:56.810 05:26:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.810 05:26:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:56.810 05:26:13 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:56.810 05:26:13 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:56.810 05:26:13 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:56.810 05:26:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:56.810 05:26:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.810 05:26:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:56.810 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:56.810 05:26:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.810 05:26:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.810 05:26:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:56.810 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:56.810 05:26:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:56.810 05:26:13 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.810 05:26:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:56.810 05:26:13 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:56.810 05:26:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.810 05:26:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.811 05:26:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.811 05:26:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.811 05:26:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:56.811 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.811 05:26:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.811 05:26:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.811 05:26:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.811 05:26:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:56.811 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.811 05:26:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:56.811 05:26:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:56.811 05:26:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:56.811 05:26:13 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:56.811 05:26:13 -- nvmf/common.sh@57 -- # uname 00:23:56.811 05:26:13 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:56.811 05:26:13 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:56.811 05:26:13 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:56.811 05:26:13 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:56.811 05:26:13 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:56.811 05:26:13 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:56.811 05:26:13 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:56.811 05:26:13 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:56.811 05:26:13 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:56.811 05:26:13 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:56.811 05:26:13 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:56.811 05:26:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:56.811 05:26:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:56.811 05:26:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:56.811 05:26:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.811 05:26:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:56.811 05:26:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.811 
05:26:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@104 -- # continue 2 00:23:56.811 05:26:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@104 -- # continue 2 00:23:56.811 05:26:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:56.811 05:26:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.811 05:26:13 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:56.811 05:26:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:56.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.811 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:56.811 altname enp217s0f0np0 00:23:56.811 altname ens818f0np0 00:23:56.811 inet 192.168.100.8/24 scope global mlx_0_0 00:23:56.811 valid_lft forever preferred_lft forever 00:23:56.811 05:26:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:56.811 05:26:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.811 05:26:13 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:56.811 05:26:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:56.811 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.811 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:56.811 altname enp217s0f1np1 00:23:56.811 altname ens818f1np1 00:23:56.811 inet 192.168.100.9/24 scope global mlx_0_1 00:23:56.811 valid_lft forever preferred_lft forever 00:23:56.811 05:26:13 -- nvmf/common.sh@410 -- # return 0 00:23:56.811 05:26:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:56.811 05:26:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:56.811 05:26:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:56.811 05:26:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:56.811 05:26:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:56.811 05:26:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:56.811 05:26:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:56.811 05:26:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.811 05:26:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:56.811 05:26:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@104 -- # continue 2 00:23:56.811 05:26:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.811 05:26:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.811 05:26:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@104 -- # continue 2 00:23:56.811 05:26:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:56.811 05:26:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.811 05:26:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:56.811 05:26:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.811 05:26:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.811 05:26:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:56.811 192.168.100.9' 00:23:56.811 05:26:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:56.811 192.168.100.9' 00:23:56.811 05:26:13 -- nvmf/common.sh@445 -- # head -n 1 00:23:56.811 05:26:13 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:57.070 05:26:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:57.070 192.168.100.9' 00:23:57.070 05:26:13 -- nvmf/common.sh@446 -- # tail -n +2 00:23:57.070 05:26:13 -- nvmf/common.sh@446 -- # head -n 1 00:23:57.070 05:26:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:57.070 05:26:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:57.070 05:26:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:57.070 05:26:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:57.070 05:26:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:57.070 05:26:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:57.070 05:26:13 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:57.070 05:26:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.070 05:26:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.070 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:57.070 05:26:13 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:57.070 05:26:13 -- nvmf/common.sh@469 -- # nvmfpid=1896404 00:23:57.070 05:26:13 -- nvmf/common.sh@470 -- # waitforlisten 1896404 00:23:57.070 05:26:13 -- common/autotest_common.sh@829 -- # '[' -z 1896404 ']' 00:23:57.070 05:26:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.070 05:26:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.070 05:26:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.070 05:26:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.070 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:57.070 [2024-11-19 05:26:13.442203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.070 [2024-11-19 05:26:13.442251] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.070 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.070 [2024-11-19 05:26:13.514973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.070 [2024-11-19 05:26:13.552762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.070 [2024-11-19 05:26:13.552869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.070 [2024-11-19 05:26:13.552878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.070 [2024-11-19 05:26:13.552886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.070 [2024-11-19 05:26:13.552988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.070 [2024-11-19 05:26:13.553060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.070 [2024-11-19 05:26:13.555544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.070 [2024-11-19 05:26:13.555548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.003 05:26:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.003 05:26:14 -- common/autotest_common.sh@862 -- # return 0 00:23:58.003 05:26:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.003 05:26:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.003 05:26:14 -- common/autotest_common.sh@10 -- # set +x 00:23:58.003 05:26:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.003 05:26:14 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:58.003 05:26:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.003 05:26:14 -- common/autotest_common.sh@10 -- # set +x 00:23:58.003 [2024-11-19 05:26:14.372158] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eee4f0/0x1ef29e0) succeed. 00:23:58.003 [2024-11-19 05:26:14.382690] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eefae0/0x1f34080) succeed. 
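The nvmf_create_transport call traced above is a plain SPDK RPC, so the same transport could be created by hand against the just-started target. A hedged equivalent (the scripts/rpc.py path is the stock SPDK layout; the flag values are copied from the trace):

    # Create the RDMA transport on the running nvmf_tgt;
    # rpc.py talks to /var/tmp/spdk.sock by default.
    sudo scripts/rpc.py nvmf_create_transport \
        -t rdma \
        --num-shared-buffers 1024 \
        -u 8192                      # in-capsule data size, bytes

The two create_ib_device NOTICE lines confirm the transport picked up both mlx5 ports.
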
00:23:58.003 05:26:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.003 05:26:14 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:58.003 05:26:14 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:58.003 05:26:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.003 05:26:14 -- common/autotest_common.sh@10 -- # set +x 00:23:58.003 05:26:14 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.003 05:26:14 -- target/shutdown.sh@28 -- # cat 00:23:58.003 05:26:14 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:58.003 05:26:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.003 05:26:14 -- common/autotest_common.sh@10 -- # set +x 00:23:58.261 Malloc1 00:23:58.261 [2024-11-19 05:26:14.604580] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.261 Malloc2 00:23:58.261 Malloc3 00:23:58.261 Malloc4 00:23:58.261 Malloc5 00:23:58.261 Malloc6 00:23:58.519 Malloc7 00:23:58.519 Malloc8 00:23:58.519 Malloc9 00:23:58.519 Malloc10 00:23:58.519 05:26:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.519 05:26:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:58.519 05:26:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.519 05:26:14 -- common/autotest_common.sh@10 -- # set +x 00:23:58.519 05:26:15 -- target/shutdown.sh@102 -- # perfpid=1896717 00:23:58.519 05:26:15 -- target/shutdown.sh@103 -- # waitforlisten 1896717 /var/tmp/bdevperf.sock 00:23:58.519 05:26:15 -- common/autotest_common.sh@829 -- # '[' -z 1896717 ']' 00:23:58.519 05:26:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.519 05:26:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.519 05:26:15 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:58.519 05:26:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
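Each `cat` in the shutdown.sh@28 loop above appends one subsystem's block of RPC commands to rpcs.txt, and the single rpc_cmd at shutdown.sh@35 replays the file in one batch, which is why Malloc1 through Malloc10 appear together. The heredoc bodies are not echoed into the log; given the resulting bdevs, the nqn.2016-06.io.spdk:cnodeN subsystems the bdevperf config later attaches to, and the 192.168.100.8:4420 listener, each block plausibly amounts to the following (a reconstruction with assumed Malloc size and block size, not the verbatim script):

    # Per-subsystem RPCs accumulated in rpcs.txt for each $i in 1..10.
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
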
00:23:58.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.519 05:26:15 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:58.519 05:26:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.519 05:26:15 -- common/autotest_common.sh@10 -- # set +x 00:23:58.519 05:26:15 -- nvmf/common.sh@520 -- # config=() 00:23:58.519 05:26:15 -- nvmf/common.sh@520 -- # local subsystem config 00:23:58.519 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.519 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.519 { 00:23:58.519 "params": { 00:23:58.519 "name": "Nvme$subsystem", 00:23:58.519 "trtype": "$TEST_TRANSPORT", 00:23:58.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.519 "adrfam": "ipv4", 00:23:58.519 "trsvcid": "$NVMF_PORT", 00:23:58.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.519 "hdgst": ${hdgst:-false}, 00:23:58.519 "ddgst": ${ddgst:-false} 00:23:58.519 }, 00:23:58.519 "method": "bdev_nvme_attach_controller" 00:23:58.519 } 00:23:58.519 EOF 00:23:58.519 )") 00:23:58.519 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.520 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.520 { 00:23:58.520 "params": { 00:23:58.520 "name": "Nvme$subsystem", 00:23:58.520 "trtype": "$TEST_TRANSPORT", 00:23:58.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.520 "adrfam": "ipv4", 00:23:58.520 "trsvcid": "$NVMF_PORT", 00:23:58.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.520 "hdgst": ${hdgst:-false}, 00:23:58.520 "ddgst": ${ddgst:-false} 00:23:58.520 }, 00:23:58.520 "method": "bdev_nvme_attach_controller" 00:23:58.520 } 00:23:58.520 EOF 00:23:58.520 )") 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.520 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.520 { 00:23:58.520 "params": { 00:23:58.520 "name": "Nvme$subsystem", 00:23:58.520 "trtype": "$TEST_TRANSPORT", 00:23:58.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.520 "adrfam": "ipv4", 00:23:58.520 "trsvcid": "$NVMF_PORT", 00:23:58.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.520 "hdgst": ${hdgst:-false}, 00:23:58.520 "ddgst": ${ddgst:-false} 00:23:58.520 }, 00:23:58.520 "method": "bdev_nvme_attach_controller" 00:23:58.520 } 00:23:58.520 EOF 00:23:58.520 )") 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.520 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.520 { 00:23:58.520 "params": { 00:23:58.520 "name": "Nvme$subsystem", 00:23:58.520 "trtype": "$TEST_TRANSPORT", 00:23:58.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.520 "adrfam": "ipv4", 00:23:58.520 "trsvcid": "$NVMF_PORT", 00:23:58.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.520 "hdgst": ${hdgst:-false}, 00:23:58.520 "ddgst": ${ddgst:-false} 00:23:58.520 }, 00:23:58.520 "method": "bdev_nvme_attach_controller" 00:23:58.520 } 00:23:58.520 EOF 00:23:58.520 )") 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.520 05:26:15 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.520 { 00:23:58.520 "params": { 00:23:58.520 "name": "Nvme$subsystem", 00:23:58.520 "trtype": "$TEST_TRANSPORT", 00:23:58.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.520 "adrfam": "ipv4", 00:23:58.520 "trsvcid": "$NVMF_PORT", 00:23:58.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.520 "hdgst": ${hdgst:-false}, 00:23:58.520 "ddgst": ${ddgst:-false} 00:23:58.520 }, 00:23:58.520 "method": "bdev_nvme_attach_controller" 00:23:58.520 } 00:23:58.520 EOF 00:23:58.520 )") 00:23:58.520 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.778 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.778 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.778 { 00:23:58.778 "params": { 00:23:58.778 "name": "Nvme$subsystem", 00:23:58.778 "trtype": "$TEST_TRANSPORT", 00:23:58.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.778 "adrfam": "ipv4", 00:23:58.778 "trsvcid": "$NVMF_PORT", 00:23:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.778 "hdgst": ${hdgst:-false}, 00:23:58.778 "ddgst": ${ddgst:-false} 00:23:58.778 }, 00:23:58.778 "method": "bdev_nvme_attach_controller" 00:23:58.778 } 00:23:58.778 EOF 00:23:58.778 )") 00:23:58.778 [2024-11-19 05:26:15.085976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:58.778 [2024-11-19 05:26:15.086030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896717 ] 00:23:58.778 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.778 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.778 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.778 { 00:23:58.778 "params": { 00:23:58.778 "name": "Nvme$subsystem", 00:23:58.778 "trtype": "$TEST_TRANSPORT", 00:23:58.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.778 "adrfam": "ipv4", 00:23:58.778 "trsvcid": "$NVMF_PORT", 00:23:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.779 "hdgst": ${hdgst:-false}, 00:23:58.779 "ddgst": ${ddgst:-false} 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 } 00:23:58.779 EOF 00:23:58.779 )") 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.779 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.779 { 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme$subsystem", 00:23:58.779 "trtype": "$TEST_TRANSPORT", 00:23:58.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "$NVMF_PORT", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.779 "hdgst": ${hdgst:-false}, 00:23:58.779 "ddgst": ${ddgst:-false} 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 } 00:23:58.779 EOF 00:23:58.779 )") 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.779 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.779 { 00:23:58.779 
"params": { 00:23:58.779 "name": "Nvme$subsystem", 00:23:58.779 "trtype": "$TEST_TRANSPORT", 00:23:58.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "$NVMF_PORT", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.779 "hdgst": ${hdgst:-false}, 00:23:58.779 "ddgst": ${ddgst:-false} 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 } 00:23:58.779 EOF 00:23:58.779 )") 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.779 05:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.779 { 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme$subsystem", 00:23:58.779 "trtype": "$TEST_TRANSPORT", 00:23:58.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "$NVMF_PORT", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.779 "hdgst": ${hdgst:-false}, 00:23:58.779 "ddgst": ${ddgst:-false} 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 } 00:23:58.779 EOF 00:23:58.779 )") 00:23:58.779 05:26:15 -- nvmf/common.sh@542 -- # cat 00:23:58.779 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.779 05:26:15 -- nvmf/common.sh@544 -- # jq . 00:23:58.779 05:26:15 -- nvmf/common.sh@545 -- # IFS=, 00:23:58.779 05:26:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme1", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme2", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme3", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme4", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme5", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme6", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme7", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme8", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme9", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 },{ 00:23:58.779 "params": { 00:23:58.779 "name": "Nvme10", 00:23:58.779 "trtype": "rdma", 00:23:58.779 "traddr": "192.168.100.8", 00:23:58.779 "adrfam": "ipv4", 00:23:58.779 "trsvcid": "4420", 00:23:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:58.779 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:58.779 "hdgst": false, 00:23:58.779 "ddgst": false 00:23:58.779 }, 00:23:58.779 "method": "bdev_nvme_attach_controller" 00:23:58.779 }' 00:23:58.779 [2024-11-19 05:26:15.159828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.779 [2024-11-19 05:26:15.196122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.712 Running I/O for 10 seconds... 
00:24:00.278 05:26:16 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:00.278 05:26:16 -- common/autotest_common.sh@862 -- # return 0
00:24:00.278 05:26:16 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:00.278 05:26:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:00.278 05:26:16 -- common/autotest_common.sh@10 -- # set +x
00:24:00.278 05:26:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:00.278 05:26:16 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:24:00.278 05:26:16 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:00.278 05:26:16 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:24:00.278 05:26:16 -- target/shutdown.sh@57 -- # local ret=1
00:24:00.278 05:26:16 -- target/shutdown.sh@58 -- # local i
00:24:00.278 05:26:16 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:24:00.278 05:26:16 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:24:00.278 05:26:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:24:00.278 05:26:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:00.278 05:26:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:00.278 05:26:16 -- common/autotest_common.sh@10 -- # set +x
00:24:00.536 05:26:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:00.536 05:26:16 -- target/shutdown.sh@60 -- # read_io_count=461
00:24:00.536 05:26:16 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']'
00:24:00.536 05:26:16 -- target/shutdown.sh@64 -- # ret=0
00:24:00.536 05:26:16 -- target/shutdown.sh@65 -- # break
00:24:00.536 05:26:16 -- target/shutdown.sh@69 -- # return 0
00:24:00.536 05:26:16 -- target/shutdown.sh@109 -- # killprocess 1896717
00:24:00.536 05:26:16 -- common/autotest_common.sh@936 -- # '[' -z 1896717 ']'
00:24:00.536 05:26:16 -- common/autotest_common.sh@940 -- # kill -0 1896717
00:24:00.536 05:26:16 -- common/autotest_common.sh@941 -- # uname
00:24:00.536 05:26:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:00.536 05:26:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1896717
00:24:00.536 05:26:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:00.536 05:26:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:00.536 05:26:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1896717'
00:24:00.536 killing process with pid 1896717
00:24:00.536 05:26:16 -- common/autotest_common.sh@955 -- # kill 1896717
00:24:00.536 05:26:16 -- common/autotest_common.sh@960 -- # wait 1896717
00:24:00.536 Received shutdown signal, test time was about 0.927660 seconds
00:24:00.536
00:24:00.536 Latency(us)
00:24:00.536 [2024-11-19T04:26:17.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.536 [2024-11-19T04:26:17.094Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme1n1 : 0.92 714.62 44.66 0.00 0.00 88470.72 7497.32 109051.90
00:24:00.536 [2024-11-19T04:26:17.094Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme2n1 : 0.92 713.79 44.61 0.00 0.00 87792.98 7759.46 106115.89
00:24:00.536 [2024-11-19T04:26:17.094Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme3n1 : 0.92 719.49 44.97 0.00 0.00 86497.38 8074.04 103179.88
00:24:00.536 [2024-11-19T04:26:17.094Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme4n1 : 0.92 739.29 46.21 0.00 0.00 83589.25 8388.61 97727.28
[2024-11-19T04:26:17.094Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme5n1 : 0.92 745.05 46.57 0.00 0.00 82231.21 8493.47 93952.41
[2024-11-19T04:26:17.094Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.536 Nvme6n1 : 0.92 749.70 46.86 0.00 0.00 81141.45 8598.32 74239.18
[2024-11-19T04:26:17.094Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.536 Verification LBA range: start 0x0 length 0x400
00:24:00.537 Nvme7n1 : 0.92 748.93 46.81 0.00 0.00 80647.77 8755.61 72980.89
[2024-11-19T04:26:17.095Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.537 Verification LBA range: start 0x0 length 0x400
00:24:00.537 Nvme8n1 : 0.92 748.16 46.76 0.00 0.00 80148.18 8860.47 71303.17
[2024-11-19T04:26:17.095Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.537 Verification LBA range: start 0x0 length 0x400
00:24:00.537 Nvme9n1 : 0.93 747.40 46.71 0.00 0.00 79645.03 8965.32 71303.17
[2024-11-19T04:26:17.095Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.537 Verification LBA range: start 0x0 length 0x400
00:24:00.537 Nvme10n1 : 0.93 521.10 32.57 0.00 0.00 113362.95 7707.03 315411.66
[2024-11-19T04:26:17.095Z] ===================================================================================================================
00:24:00.537 [2024-11-19T04:26:17.095Z] Total : 7147.53 446.72 0.00 0.00 85484.11 7497.32 315411.66
00:24:00.795 05:26:17 -- target/shutdown.sh@112 -- # sleep 1
00:24:01.728 05:26:18 -- target/shutdown.sh@113 -- # kill -0 1896404
00:24:01.728 05:26:18 -- target/shutdown.sh@115 -- # stoptarget
00:24:01.728 05:26:18 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:01.728 05:26:18 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:01.728 05:26:18 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:01.728 05:26:18 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:01.728 05:26:18 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:01.728 05:26:18 -- nvmf/common.sh@116 -- # sync
00:24:01.728 05:26:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:24:01.728 05:26:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:24:01.728 05:26:18 -- nvmf/common.sh@119 -- # set +e
00:24:01.728 05:26:18 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:01.728 05:26:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:24:01.986 rmmod nvme_rdma
00:24:01.986 rmmod nvme_fabrics
00:24:01.986 05:26:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:01.986 05:26:18 -- nvmf/common.sh@123 -- # set -e
00:24:01.986 05:26:18 -- nvmf/common.sh@124 -- # return 0
00:24:01.986 05:26:18 -- nvmf/common.sh@477 -- # '[' -n 1896404 ']'
00:24:01.986 05:26:18 -- nvmf/common.sh@478 -- # killprocess 1896404
00:24:01.986 05:26:18 --
common/autotest_common.sh@936 -- # '[' -z 1896404 ']' 00:24:01.986 05:26:18 -- common/autotest_common.sh@940 -- # kill -0 1896404 00:24:01.986 05:26:18 -- common/autotest_common.sh@941 -- # uname 00:24:01.986 05:26:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.986 05:26:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1896404 00:24:01.986 05:26:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:01.986 05:26:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:01.986 05:26:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1896404' 00:24:01.986 killing process with pid 1896404 00:24:01.986 05:26:18 -- common/autotest_common.sh@955 -- # kill 1896404 00:24:01.986 05:26:18 -- common/autotest_common.sh@960 -- # wait 1896404 00:24:02.554 05:26:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.554 05:26:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:02.554 00:24:02.554 real 0m5.667s 00:24:02.554 user 0m23.199s 00:24:02.554 sys 0m1.172s 00:24:02.554 05:26:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:02.554 05:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.554 ************************************ 00:24:02.554 END TEST nvmf_shutdown_tc2 00:24:02.554 ************************************ 00:24:02.554 05:26:18 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:02.554 05:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:02.554 05:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.554 05:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.554 ************************************ 00:24:02.554 START TEST nvmf_shutdown_tc3 00:24:02.554 ************************************ 00:24:02.554 05:26:18 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:24:02.554 05:26:18 -- target/shutdown.sh@120 -- # starttarget 00:24:02.554 05:26:18 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:02.554 05:26:18 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:02.554 05:26:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.554 05:26:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.554 05:26:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.554 05:26:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.554 05:26:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.554 05:26:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.554 05:26:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.554 05:26:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.554 05:26:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.554 05:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.554 05:26:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:02.554 05:26:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:02.554 05:26:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:02.554 05:26:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:02.554 05:26:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:02.554 05:26:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:02.554 05:26:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:02.554 05:26:18 -- nvmf/common.sh@294 -- # net_devs=() 00:24:02.554 05:26:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:02.554 05:26:18 -- 
nvmf/common.sh@295 -- # e810=() 00:24:02.554 05:26:18 -- nvmf/common.sh@295 -- # local -ga e810 00:24:02.554 05:26:18 -- nvmf/common.sh@296 -- # x722=() 00:24:02.554 05:26:18 -- nvmf/common.sh@296 -- # local -ga x722 00:24:02.554 05:26:18 -- nvmf/common.sh@297 -- # mlx=() 00:24:02.554 05:26:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:02.554 05:26:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.554 05:26:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:02.554 05:26:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:02.554 05:26:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:02.554 05:26:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:02.554 05:26:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:02.554 05:26:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:02.554 05:26:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:02.554 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:02.554 05:26:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:02.554 05:26:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:02.554 05:26:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:02.554 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:02.554 05:26:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:02.554 05:26:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:02.554 05:26:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:02.554 05:26:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:02.554 05:26:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.554 05:26:18 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:24:02.554 05:26:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.554 05:26:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:02.554 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:02.554 05:26:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.554 05:26:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:02.554 05:26:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.555 05:26:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:02.555 05:26:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.555 05:26:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:02.555 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:02.555 05:26:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.555 05:26:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:02.555 05:26:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:02.555 05:26:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:02.555 05:26:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:02.555 05:26:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:02.555 05:26:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:02.555 05:26:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:02.555 05:26:18 -- nvmf/common.sh@57 -- # uname 00:24:02.555 05:26:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:02.555 05:26:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:02.555 05:26:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:02.555 05:26:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:02.555 05:26:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:02.555 05:26:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:02.555 05:26:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:02.555 05:26:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:02.555 05:26:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:02.555 05:26:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:02.555 05:26:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:02.555 05:26:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:02.555 05:26:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:02.555 05:26:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:02.555 05:26:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:02.555 05:26:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:02.555 05:26:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@104 -- # continue 2 00:24:02.555 05:26:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@104 -- # continue 2 00:24:02.555 05:26:19 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:24:02.555 05:26:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:02.555 05:26:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:02.555 05:26:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:02.555 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:02.555 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:02.555 altname enp217s0f0np0 00:24:02.555 altname ens818f0np0 00:24:02.555 inet 192.168.100.8/24 scope global mlx_0_0 00:24:02.555 valid_lft forever preferred_lft forever 00:24:02.555 05:26:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:02.555 05:26:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:02.555 05:26:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:02.555 05:26:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:02.555 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:02.555 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:02.555 altname enp217s0f1np1 00:24:02.555 altname ens818f1np1 00:24:02.555 inet 192.168.100.9/24 scope global mlx_0_1 00:24:02.555 valid_lft forever preferred_lft forever 00:24:02.555 05:26:19 -- nvmf/common.sh@410 -- # return 0 00:24:02.555 05:26:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:02.555 05:26:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:02.555 05:26:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:02.555 05:26:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:02.555 05:26:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:02.555 05:26:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:02.555 05:26:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:02.555 05:26:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:02.555 05:26:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:02.555 05:26:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@104 -- # continue 2 00:24:02.555 05:26:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.555 05:26:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:02.555 05:26:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:02.555 05:26:19 -- 
nvmf/common.sh@104 -- # continue 2 00:24:02.555 05:26:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:02.555 05:26:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:02.555 05:26:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:02.555 05:26:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:02.555 05:26:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:02.818 05:26:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:02.818 192.168.100.9' 00:24:02.818 05:26:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:02.818 192.168.100.9' 00:24:02.818 05:26:19 -- nvmf/common.sh@445 -- # head -n 1 00:24:02.818 05:26:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:02.818 05:26:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:02.818 192.168.100.9' 00:24:02.819 05:26:19 -- nvmf/common.sh@446 -- # tail -n +2 00:24:02.819 05:26:19 -- nvmf/common.sh@446 -- # head -n 1 00:24:02.819 05:26:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:02.819 05:26:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:02.819 05:26:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:02.819 05:26:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:02.819 05:26:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:02.819 05:26:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:02.819 05:26:19 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:02.819 05:26:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:02.819 05:26:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.819 05:26:19 -- common/autotest_common.sh@10 -- # set +x 00:24:02.819 05:26:19 -- nvmf/common.sh@469 -- # nvmfpid=1897634 00:24:02.819 05:26:19 -- nvmf/common.sh@470 -- # waitforlisten 1897634 00:24:02.819 05:26:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:02.819 05:26:19 -- common/autotest_common.sh@829 -- # '[' -z 1897634 ']' 00:24:02.819 05:26:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.819 05:26:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.819 05:26:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.819 05:26:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.819 05:26:19 -- common/autotest_common.sh@10 -- # set +x 00:24:02.819 [2024-11-19 05:26:19.221684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
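tc3 then boots its own target with the same arguments (`nvmf_tgt -i 0 -e 0xFFFF -m 0x1E`; mask 0x1E puts reactors on cores 1-4, matching the reactor messages below) and waits for the RPC socket to answer, which is what waitforlisten traces as `(( i == 0 ))` / `return 0` once the poll succeeds. A condensed sketch of that start-and-wait pattern (the loop body is an assumption; rpc_get_methods is a standard SPDK RPC that fails until the app is listening):

    # Start the target in the background and wait for /var/tmp/spdk.sock to answer.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    for ((i = 100; i != 0; i--)); do
        scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i == 0 )) && { echo 'nvmf_tgt failed to start' >&2; exit 1; }
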
00:24:02.819 [2024-11-19 05:26:19.221740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.819 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.819 [2024-11-19 05:26:19.293393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.819 [2024-11-19 05:26:19.333424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:02.819 [2024-11-19 05:26:19.333540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.819 [2024-11-19 05:26:19.333550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.819 [2024-11-19 05:26:19.333558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.819 [2024-11-19 05:26:19.333660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.819 [2024-11-19 05:26:19.333747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.819 [2024-11-19 05:26:19.333857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.819 [2024-11-19 05:26:19.333858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:03.762 05:26:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.762 05:26:20 -- common/autotest_common.sh@862 -- # return 0 00:24:03.762 05:26:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:03.762 05:26:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.762 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:03.762 05:26:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.762 05:26:20 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:03.762 05:26:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.762 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:03.762 [2024-11-19 05:26:20.109876] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa7b4f0/0xa7f9e0) succeed. 00:24:03.762 [2024-11-19 05:26:20.119129] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa7cae0/0xac1080) succeed. 
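Both runs bind the same physical pair of mlx5 ports; only the userspace object addresses printed by create_ib_device differ from run to run. When this step fails, the devices can be inspected directly with rdma-core's tooling (ibv_devices and ibv_devinfo assumed installed alongside the modules loaded earlier):

    # List RDMA-capable devices, then check port state and link layer.
    ibv_devices
    ibv_devinfo -d mlx5_0 | grep -E 'state|link_layer'

Note that the mlx netdevs report `state DOWN` in the `ip addr show` output above, which evidently does not prevent the local RDMA traffic these tests generate.
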
00:24:03.762 05:26:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.762 05:26:20 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:03.762 05:26:20 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:03.762 05:26:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.762 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:03.762 05:26:20 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.762 05:26:20 -- target/shutdown.sh@28 -- # cat 00:24:03.762 05:26:20 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:03.762 05:26:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.762 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:03.762 Malloc1 00:24:04.020 [2024-11-19 05:26:20.341423] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:04.020 Malloc2 00:24:04.020 Malloc3 00:24:04.020 Malloc4 00:24:04.020 Malloc5 00:24:04.020 Malloc6 00:24:04.278 Malloc7 00:24:04.278 Malloc8 00:24:04.278 Malloc9 00:24:04.278 Malloc10 00:24:04.278 05:26:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.278 05:26:20 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:04.278 05:26:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.278 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:04.278 05:26:20 -- target/shutdown.sh@124 -- # perfpid=1897956 00:24:04.278 05:26:20 -- target/shutdown.sh@125 -- # waitforlisten 1897956 /var/tmp/bdevperf.sock 00:24:04.278 05:26:20 -- common/autotest_common.sh@829 -- # '[' -z 1897956 ']' 00:24:04.278 05:26:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.278 05:26:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.279 05:26:20 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:04.279 05:26:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
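The gen_nvmf_target_json helper traced below emits one bdev_nvme_attach_controller stanza per subsystem, comma-joins them, and pretty-prints the result with `jq .` for bdevperf to consume. A trimmed sketch of the same pattern (the real helper builds each stanza with a tab-indented <<-EOF heredoc, as the trace shows; printf is used here so the sketch survives space indentation, and the outer "subsystems"/"bdev" wrapper is inferred from what bdevperf accepts rather than shown in the log):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach stanza per subsystem, mirroring the traced heredoc fields.
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
        done
        local IFS=,
        # Comma-join the stanzas and pretty-print, as the traced "jq ." does.
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }
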
00:24:04.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.279 05:26:20 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:04.279 05:26:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.279 05:26:20 -- nvmf/common.sh@520 -- # config=() 00:24:04.279 05:26:20 -- common/autotest_common.sh@10 -- # set +x 00:24:04.279 05:26:20 -- nvmf/common.sh@520 -- # local subsystem config 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 [2024-11-19 05:26:20.830553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:04.279 [2024-11-19 05:26:20.830606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897956 ] 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.279 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.279 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.279 { 00:24:04.279 "params": { 00:24:04.279 "name": "Nvme$subsystem", 00:24:04.279 "trtype": "$TEST_TRANSPORT", 00:24:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.279 "adrfam": "ipv4", 00:24:04.279 "trsvcid": "$NVMF_PORT", 00:24:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.279 "hdgst": ${hdgst:-false}, 00:24:04.279 "ddgst": ${ddgst:-false} 00:24:04.279 }, 00:24:04.279 "method": "bdev_nvme_attach_controller" 00:24:04.279 } 00:24:04.279 EOF 00:24:04.279 )") 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.538 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.538 { 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme$subsystem", 00:24:04.538 "trtype": "$TEST_TRANSPORT", 00:24:04.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "$NVMF_PORT", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.538 "hdgst": ${hdgst:-false}, 00:24:04.538 "ddgst": ${ddgst:-false} 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 } 00:24:04.538 EOF 00:24:04.538 )") 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.538 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.538 { 00:24:04.538 
"params": { 00:24:04.538 "name": "Nvme$subsystem", 00:24:04.538 "trtype": "$TEST_TRANSPORT", 00:24:04.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "$NVMF_PORT", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.538 "hdgst": ${hdgst:-false}, 00:24:04.538 "ddgst": ${ddgst:-false} 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 } 00:24:04.538 EOF 00:24:04.538 )") 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.538 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.538 05:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.538 { 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme$subsystem", 00:24:04.538 "trtype": "$TEST_TRANSPORT", 00:24:04.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "$NVMF_PORT", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.538 "hdgst": ${hdgst:-false}, 00:24:04.538 "ddgst": ${ddgst:-false} 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 } 00:24:04.538 EOF 00:24:04.538 )") 00:24:04.538 05:26:20 -- nvmf/common.sh@542 -- # cat 00:24:04.538 05:26:20 -- nvmf/common.sh@544 -- # jq . 00:24:04.538 05:26:20 -- nvmf/common.sh@545 -- # IFS=, 00:24:04.538 05:26:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme1", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme2", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme3", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme4", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme5", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme6", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme7", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme8", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme9", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 },{ 00:24:04.538 "params": { 00:24:04.538 "name": "Nvme10", 00:24:04.538 "trtype": "rdma", 00:24:04.538 "traddr": "192.168.100.8", 00:24:04.538 "adrfam": "ipv4", 00:24:04.538 "trsvcid": "4420", 00:24:04.538 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:04.538 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:04.538 "hdgst": false, 00:24:04.538 "ddgst": false 00:24:04.538 }, 00:24:04.538 "method": "bdev_nvme_attach_controller" 00:24:04.538 }' 00:24:04.538 [2024-11-19 05:26:20.902683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.538 [2024-11-19 05:26:20.938994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.472 Running I/O for 10 seconds... 
00:24:06.037 05:26:22 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:06.037 05:26:22 -- common/autotest_common.sh@862 -- # return 0
00:24:06.037 05:26:22 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:06.037 05:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:06.037 05:26:22 -- common/autotest_common.sh@10 -- # set +x
00:24:06.037 05:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:06.037 05:26:22 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:06.037 05:26:22 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:24:06.037 05:26:22 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:06.037 05:26:22 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:24:06.037 05:26:22 -- target/shutdown.sh@57 -- # local ret=1
00:24:06.037 05:26:22 -- target/shutdown.sh@58 -- # local i
00:24:06.037 05:26:22 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:24:06.037 05:26:22 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:24:06.037 05:26:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:06.037 05:26:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:24:06.037 05:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:06.037 05:26:22 -- common/autotest_common.sh@10 -- # set +x
00:24:06.296 05:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:06.296 05:26:22 -- target/shutdown.sh@60 -- # read_io_count=504
00:24:06.296 05:26:22 -- target/shutdown.sh@63 -- # '[' 504 -ge 100 ']'
00:24:06.296 05:26:22 -- target/shutdown.sh@64 -- # ret=0
00:24:06.296 05:26:22 -- target/shutdown.sh@65 -- # break
00:24:06.296 05:26:22 -- target/shutdown.sh@69 -- # return 0
00:24:06.296 05:26:22 -- target/shutdown.sh@134 -- # killprocess 1897634
00:24:06.296 05:26:22 -- common/autotest_common.sh@936 -- # '[' -z 1897634 ']'
00:24:06.296 05:26:22 -- common/autotest_common.sh@940 -- # kill -0 1897634
00:24:06.296 05:26:22 -- common/autotest_common.sh@941 -- # uname
00:24:06.296 05:26:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:06.296 05:26:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1897634
00:24:06.296 05:26:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:06.296 05:26:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:06.296 05:26:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1897634'
00:24:06.296 killing process with pid 1897634
00:24:06.296 05:26:22 -- common/autotest_common.sh@955 -- # kill 1897634
00:24:06.296 05:26:22 -- common/autotest_common.sh@960 -- # wait 1897634
00:24:06.863 05:26:23 -- target/shutdown.sh@135 -- # nvmfpid=
00:24:06.863 05:26:23 -- target/shutdown.sh@138 -- # sleep 1
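[Annotation] The trace above is shutdown.sh's waitforio helper followed by killprocess: bdevperf's first attached bdev is polled over the RPC socket until it has completed at least 100 reads, then the nvmf target (pid 1897634) is killed so that I/O is still in flight when the target disappears. A sketch of the polling loop as reconstructed from the xtrace; the loop bounds, the jq filter, and the threshold are visible above, while any delay between retries is an assumption since it would not show up between these trace lines:

    # Reconstruction of waitforio from the xtrace; not copied from shutdown.sh.
    waitforio() {
        local rpc_addr=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # Ask the bdevperf RPC server for per-bdev I/O statistics and pull
            # out the cumulative read count for the named bdev.
            read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then   # 504 was observed here, so the first poll succeeds
                ret=0
                break
            fi
        done
        return $ret
    }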
00:24:07.437 [2024-11-19 05:26:23.752058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.752092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5b44630 sqhd:0000 p:0 m:0 dnr:0
00:24:07.437 [2024-11-19 05:26:23.752104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.752114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5b44630 sqhd:0000 p:0 m:0 dnr:0
00:24:07.437 [2024-11-19 05:26:23.752124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.752133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5b44630 sqhd:0000 p:0 m:0 dnr:0
00:24:07.437 [2024-11-19 05:26:23.752142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.752151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5b44630 sqhd:0000 p:0 m:0 dnr:0
00:24:07.437 [2024-11-19 05:26:23.754359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:07.437 [2024-11-19 05:26:23.754378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.437 [2024-11-19 05:26:23.754404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.754415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0
00:24:07.437 [2024-11-19 05:26:23.754429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0
00:24:07.437 [2024-11-19 05:26:23.754449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.754458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0
00:24:07.437 [2024-11-19 05:26:23.754481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:24:07.437 [2024-11-19 05:26:23.754490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0
00:24:07.437 [2024-11-19 05:26:23.756503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:07.437 [2024-11-19 05:26:23.756517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:24:07.437 [2024-11-19 05:26:23.756542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.756553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.756563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.756572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.756582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.756592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.756601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.756611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.758931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.437 [2024-11-19 05:26:23.758945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:07.437 [2024-11-19 05:26:23.758961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.758971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.758981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.758991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.759001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.759010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.759020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.760633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.437 [2024-11-19 05:26:23.760646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:07.437 [2024-11-19 05:26:23.760660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.760687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.760698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.760707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.760717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.760726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.760736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.762874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.437 [2024-11-19 05:26:23.762887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:07.437 [2024-11-19 05:26:23.762903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.762913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.762923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.762933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.762943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.762952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.437 [2024-11-19 05:26:23.762962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.437 [2024-11-19 05:26:23.762972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.765052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.438 [2024-11-19 05:26:23.765068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:07.438 [2024-11-19 05:26:23.765089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.765102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.765116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.765132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.765145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.765157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.765170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.767293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.438 [2024-11-19 05:26:23.767310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:07.438 [2024-11-19 05:26:23.767330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.767343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.767357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.767369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.767382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.767395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.767408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.767420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.769761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.438 [2024-11-19 05:26:23.769777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:07.438 [2024-11-19 05:26:23.769796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.769823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.769835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.769848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.769860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.769873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.769886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.771985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.438 [2024-11-19 05:26:23.772025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:07.438 [2024-11-19 05:26:23.772073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.772106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.772139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.772169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.772202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.772264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.438 [2024-11-19 05:26:23.772295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38604 cdw0:5b44630 sqhd:f300 p:1 m:1 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.438 [2024-11-19 05:26:23.775171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:07.438 [2024-11-19 05:26:23.775194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x181400 00:24:07.438 [2024-11-19 05:26:23.775208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x181d00 00:24:07.438 [2024-11-19 05:26:23.775257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x181d00 00:24:07.438 [2024-11-19 05:26:23.775289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x180c00 00:24:07.438 [2024-11-19 05:26:23.775319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003edf040 len:0x10000 key:0x183500 00:24:07.438 [2024-11-19 05:26:23.775350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x181400 00:24:07.438 [2024-11-19 05:26:23.775380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001920fcc0 len:0x10000 key:0x182900 00:24:07.438 [2024-11-19 05:26:23.775415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e8edc0 len:0x10000 key:0x183500 00:24:07.438 [2024-11-19 05:26:23.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x181d00 00:24:07.438 [2024-11-19 05:26:23.775476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 
05:26:23.775494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x181400 00:24:07.438 [2024-11-19 05:26:23.775507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e6ecc0 len:0x10000 key:0x183500 00:24:07.438 [2024-11-19 05:26:23.775542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003ebef40 len:0x10000 key:0x183500 00:24:07.438 [2024-11-19 05:26:23.775573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x180c00 00:24:07.438 [2024-11-19 05:26:23.775602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x180c00 00:24:07.438 [2024-11-19 05:26:23.775632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x180c00 00:24:07.438 [2024-11-19 05:26:23.775663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x181d00 00:24:07.438 [2024-11-19 05:26:23.775693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x183900 00:24:07.438 [2024-11-19 05:26:23.775724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.438 [2024-11-19 05:26:23.775742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.775757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x180c00 00:24:07.439 [2024-11-19 05:26:23.775787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.775817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019260e00 len:0x10000 key:0x182900 00:24:07.439 [2024-11-19 05:26:23.775847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x183900 00:24:07.439 [2024-11-19 05:26:23.775877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eef0c0 len:0x10000 key:0x183500 00:24:07.439 [2024-11-19 05:26:23.775907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.775937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x181d00 00:24:07.439 [2024-11-19 05:26:23.775967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.775985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x180c00 00:24:07.439 [2024-11-19 05:26:23.775997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eaeec0 len:0x10000 key:0x183500 00:24:07.439 [2024-11-19 05:26:23.776027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x180c00 00:24:07.439 [2024-11-19 05:26:23.776058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x180c00 00:24:07.439 [2024-11-19 05:26:23.776090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x183900 00:24:07.439 [2024-11-19 05:26:23.776120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x183900 00:24:07.439 [2024-11-19 05:26:23.776149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.776179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.776210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x181d00 00:24:07.439 [2024-11-19 05:26:23.776240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x181400 00:24:07.439 [2024-11-19 05:26:23.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x180c00 00:24:07.439 [2024-11-19 05:26:23.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e11e000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ade000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011aff000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000f19e000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x184300 00:24:07.439 [2024-11-19 05:26:23.776842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.439 [2024-11-19 05:26:23.776860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.776873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.776891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x184300 
00:24:07.440 [2024-11-19 05:26:23.776903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.776921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.776952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.776965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.776983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.776996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.440 [2024-11-19 05:26:23.777170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x184300 00:24:07.440 [2024-11-19 05:26:23.777183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780041] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller.
00:24:07.440 [2024-11-19 05:26:23.780062] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:07.440 [2024-11-19 05:26:23.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x183d00
00:24:07.440 [2024-11-19 05:26:23.780191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183d00
00:24:07.440 [2024-11-19 05:26:23.780283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183d00
00:24:07.440 [2024-11-19 05:26:23.780382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183d00
00:24:07.440 [2024-11-19 05:26:23.780412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3eb40 len:0x10000 key:0x183500
00:24:07.440 [2024-11-19 05:26:23.780472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000715fb00 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183d00
00:24:07.440 [2024-11-19 05:26:23.780570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x183500
00:24:07.440 [2024-11-19 05:26:23.780722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184300
00:24:07.440 [2024-11-19 05:26:23.780752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x183c00
00:24:07.440 [2024-11-19 05:26:23.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.440 [2024-11-19 05:26:23.780830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.780843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183d00
00:24:07.441 [2024-11-19 05:26:23.780873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.780890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.780903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.780920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183d00
00:24:07.441 [2024-11-19 05:26:23.780933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.780950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183c00
00:24:07.441 [2024-11-19 05:26:23.780963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.780980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183c00
00:24:07.441 [2024-11-19 05:26:23.780993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x183c00
00:24:07.441 [2024-11-19 05:26:23.781023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x183c00
00:24:07.441 [2024-11-19 05:26:23.781085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183c00
00:24:07.441 [2024-11-19 05:26:23.781115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000119f7000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a18000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3d7000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001129e000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a7b000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a9c000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011abd000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x184300
00:24:07.441 [2024-11-19 05:26:23.781813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.441 [2024-11-19 05:26:23.781833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bef5000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.781864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bed4000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.781894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000beb3000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.781924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be92000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.781955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.781985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be50000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.781998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.782016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.782028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.782047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.782060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.782077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.782090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.782108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x184300
00:24:07.442 [2024-11-19 05:26:23.782121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785434] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller.
00:24:07.442 [2024-11-19 05:26:23.785455] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:07.442 [2024-11-19 05:26:23.785475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.785626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.785657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.785687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.785717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.785778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.785808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.785841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.785962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.785979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.785992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.786022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.786052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.786082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x184000
00:24:07.442 [2024-11-19 05:26:23.786113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.786143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00
00:24:07.442 [2024-11-19 05:26:23.786172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.786205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x183200
00:24:07.442 [2024-11-19 05:26:23.786234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.442 [2024-11-19 05:26:23.786251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183200
00:24:07.443 [2024-11-19 05:26:23.786264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00
00:24:07.443 [2024-11-19 05:26:23.786294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183200
00:24:07.443 [2024-11-19 05:26:23.786324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183200
00:24:07.443 [2024-11-19 05:26:23.786355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00
00:24:07.443 [2024-11-19 05:26:23.786385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00
00:24:07.443 [2024-11-19 05:26:23.786415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x184000
00:24:07.443 [2024-11-19 05:26:23.786445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x184000
00:24:07.443 [2024-11-19 05:26:23.786475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x182a00
00:24:07.443 [2024-11-19 05:26:23.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x184000
00:24:07.443 [2024-11-19 05:26:23.786543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183200
00:24:07.443 [2024-11-19 05:26:23.786573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4c6000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4b6000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4d7000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.786976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.786994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f9f000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x184300
00:24:07.443 [2024-11-19 05:26:23.787350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.443 [2024-11-19 05:26:23.787369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x184300
00:24:07.444 [2024-11-19 05:26:23.787381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.787399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x184300
00:24:07.444 [2024-11-19 05:26:23.787412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.787430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x184300
00:24:07.444 [2024-11-19 05:26:23.787443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790740] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller.
00:24:07.444 [2024-11-19 05:26:23.790760] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:07.444 [2024-11-19 05:26:23.790779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.790792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.790836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.790866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.790900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00
00:24:07.444 [2024-11-19 05:26:23.790930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.790960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.790978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.790991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00
00:24:07.444 [2024-11-19 05:26:23.791020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00
00:24:07.444 [2024-11-19 05:26:23.791175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00
00:24:07.444 [2024-11-19 05:26:23.791214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00
00:24:07.444 [2024-11-19 05:26:23.791276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x182b00
00:24:07.444 [2024-11-19 05:26:23.791307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00
00:24:07.444 [2024-11-19 05:26:23.791368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00
00:24:07.444 [2024-11-19 05:26:23.791520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x182a00
00:24:07.444 [2024-11-19 05:26:23.791741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00
00:24:07.444 [2024-11-19 05:26:23.791771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.444 [2024-11-19 05:26:23.791789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x182c00
00:24:07.445 [2024-11-19 05:26:23.791801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x182c00
00:24:07.445 [2024-11-19 05:26:23.791833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00
00:24:07.445 [2024-11-19 05:26:23.791863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.791929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.791960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.791978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.791990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6d6000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184300
00:24:07.445 [2024-11-19 05:26:23.792362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0
00:24:07.445 [2024-11-19 05:26:23.792380]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.792710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.792728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.798544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x184300 00:24:07.445 [2024-11-19 05:26:23.798580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0 00:24:07.445 [2024-11-19 05:26:23.801295] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:24:07.445 [2024-11-19 05:26:23.801314] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
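Note for readers triaging these dumps: every completion above carries the same ABORTED - SQ DELETION status, so an aggregate view is usually more useful than the raw per-command prints. A minimal sketch that tallies the aborted prints per opcode and reports the LBA span, assuming the console output was saved to a file named build.log (the filename is illustrative):

# Tally aborted I/O prints per opcode and report the LBA span seen.
# Assumes the raw console log was saved as build.log (illustrative name).
awk '/nvme_io_qpair_print_command/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op[$i]++
        if ($i ~ /^lba:[0-9]+$/) {
            split($i, a, ":"); lba = a[2] + 0
            if (seen == 0 || lba < min) min = lba
            if (lba > max) max = lba
            seen = 1
        }
    }
}
END {
    for (o in op) printf "%s commands aborted: %d\n", o, op[o]
    printf "lba range: %d-%d\n", min, max
}' build.log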
00:24:07.445 [2024-11-19 05:26:23.801333 - 05:26:23.803315] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [second batch of near-identical per-command notices condensed: outstanding READ/WRITE commands on sqid:1 nsid:1 (lba 82816-93184, len:128, SGL KEYED DATA BLOCK, keys 0x182d00-0x184300), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6b6aa000 sqhd:5310 p:0 m:0 dnr:0]
00:24:07.447 [2024-11-19 05:26:23.806332] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller.
00:24:07.447 [2024-11-19 05:26:23.806350] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:07.447 [2024-11-19 05:26:23.806369 - 05:26:23.807950] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [third batch of near-identical per-command notices condensed: READ/WRITE commands on sqid:1 nsid:1 (lba 84096-93824, len:128, SGL KEYED DATA BLOCK, keys 0x182f00-0x184300) with the same ABORTED - SQ DELETION completions; the dump stops mid-stream when the test kills bdevperf]
00:24:07.706 05:26:24 -- target/shutdown.sh@141 -- # kill -9 1897956
00:24:07.706 05:26:24 -- target/shutdown.sh@143 -- # stoptarget
00:24:07.706 05:26:24 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:07.706 05:26:24 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:07.706 05:26:24 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:07.706 05:26:24 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:07.706 05:26:24 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:07.706 05:26:24 -- nvmf/common.sh@116 -- # sync
00:24:07.706 05:26:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:24:07.706 05:26:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:24:07.706 05:26:24 -- nvmf/common.sh@119 -- # set +e
00:24:07.706 05:26:24 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:07.706 05:26:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:24:07.706 rmmod nvme_rdma
00:24:07.706 rmmod nvme_fabrics
00:24:07.706 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1897956 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10
00:24:07.706 05:26:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:07.706 05:26:24 -- nvmf/common.sh@123 -- # set -e
00:24:07.706 05:26:24 -- nvmf/common.sh@124 -- # return 0
00:24:07.706 05:26:24 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:24:07.706 05:26:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:07.706 05:26:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:24:07.706
00:24:07.706 real	0m5.337s
00:24:07.706 user	0m18.132s
00:24:07.706 sys	0m1.305s
00:24:07.706 05:26:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:07.706 05:26:24 -- common/autotest_common.sh@10 -- # set +x
00:24:07.706 ************************************
00:24:07.706 END TEST nvmf_shutdown_tc3
00:24:07.706 ************************************
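Context for the trace above: tc3 deliberately SIGKILLs the bdevperf initiator mid-I/O (pid 1897956), which is what produced the qpair abort dumps, and then stoptarget plus nvmftestfini clean up the workspace and unload the RDMA transport modules. A hedged sketch of the cleanup shape the xtrace walks through (reconstructed from target/shutdown.sh@41-45 and nvmf/common.sh@116-124; $TEST_TRANSPORT and the sleep between retries are assumed names/steps, not the verbatim SPDK source):

# Reconstructed teardown sketch; shapes only, not verbatim SPDK code.
stoptarget() {
    rm -f ./local-job0-0-verify.state
    rm -rf "$rootdir/test/nvmf/target/bdevperf.conf"
    rm -rf "$rootdir/test/nvmf/target/rpcs.txt"
    nvmftestfini
}

nvmfcleanup() {
    sync
    if [[ $TEST_TRANSPORT == rdma ]]; then  # assumed variable name for the transport check
        set +e                              # module removal may fail while references remain
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && break
            sleep 1                         # assumed back-off between retries
        done
        modprobe -v -r nvme-fabrics
        set -e
    fi
}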
# '[' rdma == tcp ']' 00:24:07.706 05:26:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:07.706 05:26:24 -- nvmf/common.sh@119 -- # set +e 00:24:07.706 05:26:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:07.706 05:26:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:07.706 rmmod nvme_rdma 00:24:07.706 rmmod nvme_fabrics 00:24:07.706 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1897956 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:07.706 05:26:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:07.706 05:26:24 -- nvmf/common.sh@123 -- # set -e 00:24:07.706 05:26:24 -- nvmf/common.sh@124 -- # return 0 00:24:07.706 05:26:24 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:07.706 05:26:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:07.706 05:26:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:07.706 00:24:07.706 real 0m5.337s 00:24:07.706 user 0m18.132s 00:24:07.706 sys 0m1.305s 00:24:07.706 05:26:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:07.706 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.706 ************************************ 00:24:07.706 END TEST nvmf_shutdown_tc3 00:24:07.706 ************************************ 00:24:07.964 05:26:24 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:07.964 00:24:07.964 real 0m25.671s 00:24:07.964 user 1m14.891s 00:24:07.964 sys 0m9.265s 00:24:07.964 05:26:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:07.964 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.964 ************************************ 00:24:07.964 END TEST nvmf_shutdown 00:24:07.964 ************************************ 00:24:07.964 05:26:24 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:07.964 05:26:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.964 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.964 05:26:24 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:07.964 05:26:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.964 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.964 05:26:24 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:07.964 05:26:24 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:07.964 05:26:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:07.964 05:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.964 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.964 ************************************ 00:24:07.964 START TEST nvmf_multicontroller 00:24:07.964 ************************************ 00:24:07.964 05:26:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:07.964 * Looking for test storage... 
00:24:07.964 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:07.964 05:26:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:07.964 05:26:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:07.964 05:26:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:07.964 05:26:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:07.964 05:26:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:07.964 05:26:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:07.964 05:26:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:08.221 05:26:24 -- scripts/common.sh@335 -- # IFS=.-: 00:24:08.221 05:26:24 -- scripts/common.sh@335 -- # read -ra ver1 00:24:08.221 05:26:24 -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.221 05:26:24 -- scripts/common.sh@336 -- # read -ra ver2 00:24:08.221 05:26:24 -- scripts/common.sh@337 -- # local 'op=<' 00:24:08.221 05:26:24 -- scripts/common.sh@339 -- # ver1_l=2 00:24:08.221 05:26:24 -- scripts/common.sh@340 -- # ver2_l=1 00:24:08.221 05:26:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:08.221 05:26:24 -- scripts/common.sh@343 -- # case "$op" in 00:24:08.221 05:26:24 -- scripts/common.sh@344 -- # : 1 00:24:08.221 05:26:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:08.221 05:26:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.221 05:26:24 -- scripts/common.sh@364 -- # decimal 1 00:24:08.221 05:26:24 -- scripts/common.sh@352 -- # local d=1 00:24:08.221 05:26:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.221 05:26:24 -- scripts/common.sh@354 -- # echo 1 00:24:08.221 05:26:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:08.221 05:26:24 -- scripts/common.sh@365 -- # decimal 2 00:24:08.221 05:26:24 -- scripts/common.sh@352 -- # local d=2 00:24:08.221 05:26:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.221 05:26:24 -- scripts/common.sh@354 -- # echo 2 00:24:08.221 05:26:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:08.221 05:26:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:08.221 05:26:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:08.221 05:26:24 -- scripts/common.sh@367 -- # return 0 00:24:08.221 05:26:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.221 05:26:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:08.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.221 --rc genhtml_branch_coverage=1 00:24:08.221 --rc genhtml_function_coverage=1 00:24:08.221 --rc genhtml_legend=1 00:24:08.221 --rc geninfo_all_blocks=1 00:24:08.221 --rc geninfo_unexecuted_blocks=1 00:24:08.222 00:24:08.222 ' 00:24:08.222 05:26:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.222 --rc genhtml_branch_coverage=1 00:24:08.222 --rc genhtml_function_coverage=1 00:24:08.222 --rc genhtml_legend=1 00:24:08.222 --rc geninfo_all_blocks=1 00:24:08.222 --rc geninfo_unexecuted_blocks=1 00:24:08.222 00:24:08.222 ' 00:24:08.222 05:26:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.222 --rc genhtml_branch_coverage=1 00:24:08.222 --rc genhtml_function_coverage=1 00:24:08.222 --rc genhtml_legend=1 00:24:08.222 --rc geninfo_all_blocks=1 00:24:08.222 --rc geninfo_unexecuted_blocks=1 00:24:08.222 00:24:08.222 ' 
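The cmp_versions trace above is the harness checking whether the installed lcov predates 2.0: "lt 1.15 2" splits both version strings on ".", "-" and ":" (the IFS=.-: assignment), reads them into the ver1/ver2 arrays, and walks the fields with "decimal" until one side wins; since 1.15 is older, the pre-2.0 "--rc lcov_branch_coverage=1 ..." spellings are exported, and the companion LCOV= assignment continues just below. A condensed sketch of that comparison, reconstructed from the traced statements (the real scripts/common.sh also handles ">" and equality operators that this sketch folds together):

    #!/usr/bin/env bash
    # Condensed reconstruction of the version check traced above
    # (scripts/common.sh cmp_versions); approximate, not verbatim.
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-:            # split version fields on ".", "-" and ":"
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a > b)) && { [[ $op == ">" ]]; return; }   # first differing field decides
            ((a < b)) && { [[ $op == "<" ]]; return; }
        done
        return 1    # versions equal: neither "<" nor ">" holds
    }
    lt 1.15 2 && echo "lcov older than 2: use the pre-2.0 --rc spellings"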
00:24:08.222 05:26:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.222 --rc genhtml_branch_coverage=1 00:24:08.222 --rc genhtml_function_coverage=1 00:24:08.222 --rc genhtml_legend=1 00:24:08.222 --rc geninfo_all_blocks=1 00:24:08.222 --rc geninfo_unexecuted_blocks=1 00:24:08.222 00:24:08.222 ' 00:24:08.222 05:26:24 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.222 05:26:24 -- nvmf/common.sh@7 -- # uname -s 00:24:08.222 05:26:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.222 05:26:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.222 05:26:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.222 05:26:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.222 05:26:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.222 05:26:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.222 05:26:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.222 05:26:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.222 05:26:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.222 05:26:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.222 05:26:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:08.222 05:26:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:08.222 05:26:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.222 05:26:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.222 05:26:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.222 05:26:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:08.222 05:26:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.222 05:26:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.222 05:26:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.222 05:26:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.222 05:26:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.222 05:26:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.222 05:26:24 -- paths/export.sh@5 -- # export PATH 00:24:08.222 05:26:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.222 05:26:24 -- nvmf/common.sh@46 -- # : 0 00:24:08.222 05:26:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.222 05:26:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.222 05:26:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.222 05:26:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.222 05:26:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.222 05:26:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.222 05:26:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.222 05:26:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.222 05:26:24 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.222 05:26:24 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.222 05:26:24 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:08.222 05:26:24 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:08.222 05:26:24 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.222 05:26:24 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:08.222 05:26:24 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:08.222 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
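Everything after the "Skipping tests on RDMA..." notice above is an early bail-out: the trace shows host/multicontroller.sh comparing the transport at its line 18, echoing the notice at line 19, and leaving via exit 0 at line 20. A minimal sketch of that guard as reconstructed from the xtrace (the TEST_TRANSPORT variable name is an assumption; only the expanded comparison '[' rdma == rdma ']' is visible in the log):

    #!/usr/bin/env bash
    # Guard reconstructed from host/multicontroller.sh@18-20 as traced above.
    # Assumption: $TEST_TRANSPORT carries the value passed via --transport=rdma.
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0    # zero status: a deliberate skip must not fail the suite
    fi

Because the status is zero, run_test records the test as passed, which is why the END TEST banner and the real/user/sys timing summary below are printed for a test that did no I/O.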
00:24:08.222 05:26:24 -- host/multicontroller.sh@20 -- # exit 0 00:24:08.222 00:24:08.222 real 0m0.202s 00:24:08.222 user 0m0.110s 00:24:08.222 sys 0m0.107s 00:24:08.222 05:26:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:08.222 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.222 ************************************ 00:24:08.222 END TEST nvmf_multicontroller 00:24:08.222 ************************************ 00:24:08.222 05:26:24 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:08.222 05:26:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:08.222 05:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.222 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.222 ************************************ 00:24:08.222 START TEST nvmf_aer 00:24:08.222 ************************************ 00:24:08.222 05:26:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:08.222 * Looking for test storage... 00:24:08.222 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:08.222 05:26:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:08.222 05:26:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:08.222 05:26:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:08.222 05:26:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:08.222 05:26:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:08.222 05:26:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:08.222 05:26:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:08.222 05:26:24 -- scripts/common.sh@335 -- # IFS=.-: 00:24:08.222 05:26:24 -- scripts/common.sh@335 -- # read -ra ver1 00:24:08.222 05:26:24 -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.222 05:26:24 -- scripts/common.sh@336 -- # read -ra ver2 00:24:08.222 05:26:24 -- scripts/common.sh@337 -- # local 'op=<' 00:24:08.222 05:26:24 -- scripts/common.sh@339 -- # ver1_l=2 00:24:08.222 05:26:24 -- scripts/common.sh@340 -- # ver2_l=1 00:24:08.222 05:26:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:08.222 05:26:24 -- scripts/common.sh@343 -- # case "$op" in 00:24:08.222 05:26:24 -- scripts/common.sh@344 -- # : 1 00:24:08.222 05:26:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:08.222 05:26:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.222 05:26:24 -- scripts/common.sh@364 -- # decimal 1 00:24:08.222 05:26:24 -- scripts/common.sh@352 -- # local d=1 00:24:08.222 05:26:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.222 05:26:24 -- scripts/common.sh@354 -- # echo 1 00:24:08.222 05:26:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:08.222 05:26:24 -- scripts/common.sh@365 -- # decimal 2 00:24:08.481 05:26:24 -- scripts/common.sh@352 -- # local d=2 00:24:08.481 05:26:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.481 05:26:24 -- scripts/common.sh@354 -- # echo 2 00:24:08.481 05:26:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:08.481 05:26:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:08.481 05:26:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:08.481 05:26:24 -- scripts/common.sh@367 -- # return 0 00:24:08.481 05:26:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.481 05:26:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:08.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.481 --rc genhtml_branch_coverage=1 00:24:08.481 --rc genhtml_function_coverage=1 00:24:08.481 --rc genhtml_legend=1 00:24:08.481 --rc geninfo_all_blocks=1 00:24:08.481 --rc geninfo_unexecuted_blocks=1 00:24:08.481 00:24:08.481 ' 00:24:08.481 05:26:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:08.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.481 --rc genhtml_branch_coverage=1 00:24:08.481 --rc genhtml_function_coverage=1 00:24:08.481 --rc genhtml_legend=1 00:24:08.481 --rc geninfo_all_blocks=1 00:24:08.481 --rc geninfo_unexecuted_blocks=1 00:24:08.481 00:24:08.481 ' 00:24:08.481 05:26:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:08.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.481 --rc genhtml_branch_coverage=1 00:24:08.481 --rc genhtml_function_coverage=1 00:24:08.481 --rc genhtml_legend=1 00:24:08.481 --rc geninfo_all_blocks=1 00:24:08.481 --rc geninfo_unexecuted_blocks=1 00:24:08.481 00:24:08.481 ' 00:24:08.481 05:26:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:08.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.481 --rc genhtml_branch_coverage=1 00:24:08.481 --rc genhtml_function_coverage=1 00:24:08.481 --rc genhtml_legend=1 00:24:08.481 --rc geninfo_all_blocks=1 00:24:08.481 --rc geninfo_unexecuted_blocks=1 00:24:08.481 00:24:08.481 ' 00:24:08.481 05:26:24 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.481 05:26:24 -- nvmf/common.sh@7 -- # uname -s 00:24:08.481 05:26:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.481 05:26:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.481 05:26:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.481 05:26:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.481 05:26:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.481 05:26:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.481 05:26:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.481 05:26:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.481 05:26:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.481 05:26:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.481 05:26:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
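The NVME_HOSTNQN assigned just above comes straight from nvme gen-hostnqn, which emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<host-uuid>; the NVME_HOSTID assigned on the next traced line is the same UUID with the NQN prefix stripped, and both are packed into the NVME_HOST argument array for later nvme connect calls. A sketch of that derivation; the prefix-stripping expression here is an assumption, since only the two resulting values appear in the trace:

    #!/usr/bin/env bash
    # How common.sh's host identity variables relate (sketch; the exact
    # stripping expression in the real script may differ).
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumption: keep only the UUID part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Later consumers expand the array, e.g.: nvme connect "${NVME_HOST[@]}" ...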
00:24:08.481 05:26:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:08.481 05:26:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.481 05:26:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.481 05:26:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.481 05:26:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:08.481 05:26:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.481 05:26:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.481 05:26:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.481 05:26:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.481 05:26:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.481 05:26:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.481 05:26:24 -- paths/export.sh@5 -- # export PATH 00:24:08.481 05:26:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.481 05:26:24 -- nvmf/common.sh@46 -- # : 0 00:24:08.481 05:26:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.481 05:26:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.481 05:26:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.481 05:26:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.481 05:26:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.481 05:26:24 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.481 05:26:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.481 05:26:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.481 05:26:24 -- host/aer.sh@11 -- # nvmftestinit 00:24:08.481 05:26:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:08.481 05:26:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.481 05:26:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.481 05:26:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.481 05:26:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.481 05:26:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.481 05:26:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.481 05:26:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.481 05:26:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:08.481 05:26:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:08.481 05:26:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:08.481 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.596 05:26:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.596 05:26:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:16.596 05:26:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:16.596 05:26:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:16.596 05:26:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:16.596 05:26:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:16.596 05:26:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:16.596 05:26:31 -- nvmf/common.sh@294 -- # net_devs=() 00:24:16.596 05:26:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:16.596 05:26:31 -- nvmf/common.sh@295 -- # e810=() 00:24:16.596 05:26:31 -- nvmf/common.sh@295 -- # local -ga e810 00:24:16.596 05:26:31 -- nvmf/common.sh@296 -- # x722=() 00:24:16.596 05:26:31 -- nvmf/common.sh@296 -- # local -ga x722 00:24:16.596 05:26:31 -- nvmf/common.sh@297 -- # mlx=() 00:24:16.596 05:26:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:16.596 05:26:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.596 05:26:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:16.596 05:26:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:16.596 05:26:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:16.596 05:26:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:16.596 05:26:31 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:16.596 05:26:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.596 05:26:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:16.596 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:16.596 05:26:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:16.596 05:26:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.597 05:26:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:16.597 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:16.597 05:26:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.597 05:26:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.597 05:26:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.597 05:26:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:16.597 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.597 05:26:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.597 05:26:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.597 05:26:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:16.597 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.597 05:26:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:16.597 05:26:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:16.597 05:26:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:16.597 05:26:31 -- nvmf/common.sh@57 -- # uname 00:24:16.597 05:26:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:16.597 05:26:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:16.597 05:26:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:16.597 05:26:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:16.597 05:26:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:16.597 05:26:31 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:24:16.597 05:26:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:16.597 05:26:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:16.597 05:26:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:16.597 05:26:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:16.597 05:26:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:16.597 05:26:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.597 05:26:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:16.597 05:26:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:16.597 05:26:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.597 05:26:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@104 -- # continue 2 00:24:16.597 05:26:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@104 -- # continue 2 00:24:16.597 05:26:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:16.597 05:26:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.597 05:26:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:16.597 05:26:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:16.597 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.597 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:16.597 altname enp217s0f0np0 00:24:16.597 altname ens818f0np0 00:24:16.597 inet 192.168.100.8/24 scope global mlx_0_0 00:24:16.597 valid_lft forever preferred_lft forever 00:24:16.597 05:26:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:16.597 05:26:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.597 05:26:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:16.597 05:26:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:16.597 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.597 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:16.597 altname enp217s0f1np1 00:24:16.597 altname ens818f1np1 00:24:16.597 inet 192.168.100.9/24 scope global mlx_0_1 00:24:16.597 valid_lft 
forever preferred_lft forever 00:24:16.597 05:26:31 -- nvmf/common.sh@410 -- # return 0 00:24:16.597 05:26:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:16.597 05:26:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:16.597 05:26:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:16.597 05:26:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:16.597 05:26:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.597 05:26:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:16.597 05:26:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:16.597 05:26:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.597 05:26:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:16.597 05:26:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@104 -- # continue 2 00:24:16.597 05:26:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.597 05:26:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.597 05:26:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@104 -- # continue 2 00:24:16.597 05:26:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:16.597 05:26:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.597 05:26:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:16.597 05:26:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.597 05:26:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.597 05:26:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:16.597 192.168.100.9' 00:24:16.597 05:26:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:16.597 192.168.100.9' 00:24:16.597 05:26:31 -- nvmf/common.sh@445 -- # head -n 1 00:24:16.597 05:26:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:16.597 05:26:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:16.597 192.168.100.9' 00:24:16.597 05:26:31 -- nvmf/common.sh@446 -- # head -n 1 00:24:16.597 05:26:31 -- nvmf/common.sh@446 -- # tail -n +2 00:24:16.597 05:26:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:16.597 05:26:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:16.597 05:26:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:16.597 05:26:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:16.597 05:26:31 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:16.597 05:26:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:16.597 05:26:31 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:16.597 05:26:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:16.597 05:26:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.597 05:26:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.597 05:26:31 -- nvmf/common.sh@469 -- # nvmfpid=1902061 00:24:16.597 05:26:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.597 05:26:31 -- nvmf/common.sh@470 -- # waitforlisten 1902061 00:24:16.597 05:26:31 -- common/autotest_common.sh@829 -- # '[' -z 1902061 ']' 00:24:16.597 05:26:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.597 05:26:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.597 05:26:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.597 05:26:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.597 05:26:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 [2024-11-19 05:26:31.991995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:16.598 [2024-11-19 05:26:31.992044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.598 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.598 [2024-11-19 05:26:32.064925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.598 [2024-11-19 05:26:32.102712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:16.598 [2024-11-19 05:26:32.102825] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.598 [2024-11-19 05:26:32.102840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.598 [2024-11-19 05:26:32.102849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
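By this point nvmfappstart has launched build/bin/nvmf_tgt (pid 1902061) with -i 0 -e 0xFFFF -m 0xF, and waitforlisten is polling /var/tmp/spdk.sock with max_retries=100 while the DPDK EAL and reactor start-up notices continue below. Reduced to its essentials, that start-and-wait pattern looks like the sketch here; the probe via rpc_get_methods is an assumed stand-in, since the log does not show how waitforlisten actually tests the socket:

    #!/usr/bin/env bash
    # Start-and-wait pattern behind nvmfappstart/waitforlisten (sketch).
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do          # max_retries=100, as traced
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                            # RPC socket is answering: target is up
        fi
        kill -0 "$nvmfpid" 2> /dev/null || { echo "target died during startup"; exit 1; }
        sleep 0.5
    done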
00:24:16.598 [2024-11-19 05:26:32.102899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.598 [2024-11-19 05:26:32.102918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.598 [2024-11-19 05:26:32.103006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.598 [2024-11-19 05:26:32.103008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.598 05:26:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.598 05:26:32 -- common/autotest_common.sh@862 -- # return 0 00:24:16.598 05:26:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:16.598 05:26:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.598 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 05:26:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.598 05:26:32 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:16.598 05:26:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 [2024-11-19 05:26:32.878907] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdb1200/0xdb56f0) succeed. 00:24:16.598 [2024-11-19 05:26:32.888120] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdb27f0/0xdf6d90) succeed. 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:16.598 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 Malloc0 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:16.598 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.598 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:16.598 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 [2024-11-19 05:26:33.052438] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:16.598 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.598 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.598 [2024-11-19 05:26:33.060156] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:16.598 [ 00:24:16.598 { 00:24:16.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.598 "subtype": 
"Discovery", 00:24:16.598 "listen_addresses": [], 00:24:16.598 "allow_any_host": true, 00:24:16.598 "hosts": [] 00:24:16.598 }, 00:24:16.598 { 00:24:16.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.598 "subtype": "NVMe", 00:24:16.598 "listen_addresses": [ 00:24:16.598 { 00:24:16.598 "transport": "RDMA", 00:24:16.598 "trtype": "RDMA", 00:24:16.598 "adrfam": "IPv4", 00:24:16.598 "traddr": "192.168.100.8", 00:24:16.598 "trsvcid": "4420" 00:24:16.598 } 00:24:16.598 ], 00:24:16.598 "allow_any_host": true, 00:24:16.598 "hosts": [], 00:24:16.598 "serial_number": "SPDK00000000000001", 00:24:16.598 "model_number": "SPDK bdev Controller", 00:24:16.598 "max_namespaces": 2, 00:24:16.598 "min_cntlid": 1, 00:24:16.598 "max_cntlid": 65519, 00:24:16.598 "namespaces": [ 00:24:16.598 { 00:24:16.598 "nsid": 1, 00:24:16.598 "bdev_name": "Malloc0", 00:24:16.598 "name": "Malloc0", 00:24:16.598 "nguid": "E3D4E6FDCF944E60B94A6B920D1E5198", 00:24:16.598 "uuid": "e3d4e6fd-cf94-4e60-b94a-6b920d1e5198" 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 } 00:24:16.598 ] 00:24:16.598 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.598 05:26:33 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:16.598 05:26:33 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:16.598 05:26:33 -- host/aer.sh@33 -- # aerpid=1902235 00:24:16.598 05:26:33 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:16.598 05:26:33 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:16.598 05:26:33 -- common/autotest_common.sh@1254 -- # local i=0 00:24:16.598 05:26:33 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.598 05:26:33 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:16.598 05:26:33 -- common/autotest_common.sh@1257 -- # i=1 00:24:16.598 05:26:33 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:16.598 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.856 05:26:33 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.856 05:26:33 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:16.856 05:26:33 -- common/autotest_common.sh@1257 -- # i=2 00:24:16.856 05:26:33 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:16.856 05:26:33 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.856 05:26:33 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:16.856 05:26:33 -- common/autotest_common.sh@1265 -- # return 0 00:24:16.856 05:26:33 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:16.856 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.856 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 Malloc1 00:24:16.856 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.856 05:26:33 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:16.856 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.856 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.856 05:26:33 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:16.856 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.856 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 [ 00:24:16.856 { 00:24:16.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.856 "subtype": "Discovery", 00:24:16.856 "listen_addresses": [], 00:24:16.856 "allow_any_host": true, 00:24:16.856 "hosts": [] 00:24:16.856 }, 00:24:16.856 { 00:24:16.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.856 "subtype": "NVMe", 00:24:16.856 "listen_addresses": [ 00:24:16.856 { 00:24:16.856 "transport": "RDMA", 00:24:16.856 "trtype": "RDMA", 00:24:16.856 "adrfam": "IPv4", 00:24:16.856 "traddr": "192.168.100.8", 00:24:16.856 "trsvcid": "4420" 00:24:16.856 } 00:24:16.856 ], 00:24:16.856 "allow_any_host": true, 00:24:16.856 "hosts": [], 00:24:16.856 "serial_number": "SPDK00000000000001", 00:24:16.856 "model_number": "SPDK bdev Controller", 00:24:16.856 "max_namespaces": 2, 00:24:16.856 "min_cntlid": 1, 00:24:16.856 "max_cntlid": 65519, 00:24:16.856 "namespaces": [ 00:24:16.856 { 00:24:16.856 "nsid": 1, 00:24:16.856 "bdev_name": "Malloc0", 00:24:16.856 "name": "Malloc0", 00:24:16.856 "nguid": "E3D4E6FDCF944E60B94A6B920D1E5198", 00:24:16.856 "uuid": "e3d4e6fd-cf94-4e60-b94a-6b920d1e5198" 00:24:16.856 }, 00:24:16.856 { 00:24:16.856 "nsid": 2, 00:24:16.856 "bdev_name": "Malloc1", 00:24:16.856 "name": "Malloc1", 00:24:16.856 "nguid": "C18B72AFE4E74519823137C438209B78", 00:24:16.856 "uuid": "c18b72af-e4e7-4519-8231-37c438209b78" 00:24:16.856 } 00:24:16.856 ] 00:24:16.856 } 00:24:16.856 ] 00:24:16.856 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.856 05:26:33 -- host/aer.sh@43 -- # wait 1902235 00:24:16.856 Asynchronous Event Request test 00:24:16.856 Attaching to 192.168.100.8 00:24:16.856 Attached to 192.168.100.8 00:24:16.856 Registering asynchronous event callbacks... 00:24:16.856 Starting namespace attribute notice tests for all controllers... 00:24:16.856 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:16.856 aer_cb - Changed Namespace 00:24:16.856 Cleaning up... 
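That completes the AER scenario: the harness created the RDMA transport, published a Malloc0-backed subsystem on 192.168.100.8:4420, launched test/nvme/aer/aer with -n 2 and a touch file, waited for the file to appear (the i=0,1 polling above, 0.1 s sleeps, capped at 200 tries), and then hot-added Malloc1 as nsid 2; that hot-add is what produced the "aer_cb - Changed Namespace" notice. A condensed replay of the traced RPC sequence, with every argument copied from the xtrace (the harness issues these through its rpc_cmd wrapper; here they go through rpc.py directly):

    #!/usr/bin/env bash
    # RPC sequence from host/aer.sh, arguments verbatim from the trace above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # ... aer connects, registers its callbacks, then touches /tmp/aer_touch_file ...
    # Hot-adding a second namespace fires the namespace-attribute-changed AEN:
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The cleanup that follows below simply unwinds this: both Malloc bdevs are deleted, the subsystem is removed, and nvmftestfini unloads nvme-rdma.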
00:24:16.856 05:26:33 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:16.856 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.856 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.856 05:26:33 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:16.856 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.856 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.114 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.114 05:26:33 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.114 05:26:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.114 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.114 05:26:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.114 05:26:33 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:17.114 05:26:33 -- host/aer.sh@51 -- # nvmftestfini 00:24:17.114 05:26:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:17.114 05:26:33 -- nvmf/common.sh@116 -- # sync 00:24:17.114 05:26:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:17.114 05:26:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:17.114 05:26:33 -- nvmf/common.sh@119 -- # set +e 00:24:17.114 05:26:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:17.114 05:26:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:17.114 rmmod nvme_rdma 00:24:17.114 rmmod nvme_fabrics 00:24:17.114 05:26:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:17.114 05:26:33 -- nvmf/common.sh@123 -- # set -e 00:24:17.114 05:26:33 -- nvmf/common.sh@124 -- # return 0 00:24:17.114 05:26:33 -- nvmf/common.sh@477 -- # '[' -n 1902061 ']' 00:24:17.114 05:26:33 -- nvmf/common.sh@478 -- # killprocess 1902061 00:24:17.114 05:26:33 -- common/autotest_common.sh@936 -- # '[' -z 1902061 ']' 00:24:17.114 05:26:33 -- common/autotest_common.sh@940 -- # kill -0 1902061 00:24:17.114 05:26:33 -- common/autotest_common.sh@941 -- # uname 00:24:17.114 05:26:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.114 05:26:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1902061 00:24:17.114 05:26:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:17.114 05:26:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:17.114 05:26:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1902061' 00:24:17.114 killing process with pid 1902061 00:24:17.115 05:26:33 -- common/autotest_common.sh@955 -- # kill 1902061 00:24:17.115 [2024-11-19 05:26:33.571477] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:17.115 05:26:33 -- common/autotest_common.sh@960 -- # wait 1902061 00:24:17.372 05:26:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:17.372 05:26:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:17.372 00:24:17.372 real 0m9.202s 00:24:17.372 user 0m8.879s 00:24:17.372 sys 0m5.900s 00:24:17.372 05:26:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:17.372 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.372 ************************************ 00:24:17.372 END TEST nvmf_aer 00:24:17.372 ************************************ 00:24:17.372 05:26:33 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:17.372 05:26:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:17.372 05:26:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.372 05:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.372 ************************************ 00:24:17.372 START TEST nvmf_async_init 00:24:17.372 ************************************ 00:24:17.372 05:26:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:17.631 * Looking for test storage... 00:24:17.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:17.631 05:26:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:17.631 05:26:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:17.631 05:26:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:17.631 05:26:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:17.631 05:26:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:17.631 05:26:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:17.631 05:26:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:17.631 05:26:34 -- scripts/common.sh@335 -- # IFS=.-: 00:24:17.631 05:26:34 -- scripts/common.sh@335 -- # read -ra ver1 00:24:17.631 05:26:34 -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.631 05:26:34 -- scripts/common.sh@336 -- # read -ra ver2 00:24:17.631 05:26:34 -- scripts/common.sh@337 -- # local 'op=<' 00:24:17.631 05:26:34 -- scripts/common.sh@339 -- # ver1_l=2 00:24:17.631 05:26:34 -- scripts/common.sh@340 -- # ver2_l=1 00:24:17.631 05:26:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:17.631 05:26:34 -- scripts/common.sh@343 -- # case "$op" in 00:24:17.631 05:26:34 -- scripts/common.sh@344 -- # : 1 00:24:17.631 05:26:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:17.631 05:26:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.631 05:26:34 -- scripts/common.sh@364 -- # decimal 1 00:24:17.631 05:26:34 -- scripts/common.sh@352 -- # local d=1 00:24:17.631 05:26:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.631 05:26:34 -- scripts/common.sh@354 -- # echo 1 00:24:17.631 05:26:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:17.631 05:26:34 -- scripts/common.sh@365 -- # decimal 2 00:24:17.631 05:26:34 -- scripts/common.sh@352 -- # local d=2 00:24:17.631 05:26:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.631 05:26:34 -- scripts/common.sh@354 -- # echo 2 00:24:17.631 05:26:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:17.631 05:26:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:17.631 05:26:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:17.631 05:26:34 -- scripts/common.sh@367 -- # return 0 00:24:17.631 05:26:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.631 05:26:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.631 --rc genhtml_branch_coverage=1 00:24:17.631 --rc genhtml_function_coverage=1 00:24:17.631 --rc genhtml_legend=1 00:24:17.631 --rc geninfo_all_blocks=1 00:24:17.631 --rc geninfo_unexecuted_blocks=1 00:24:17.631 00:24:17.631 ' 00:24:17.631 05:26:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.631 --rc genhtml_branch_coverage=1 00:24:17.631 --rc genhtml_function_coverage=1 00:24:17.631 --rc genhtml_legend=1 00:24:17.631 --rc geninfo_all_blocks=1 00:24:17.631 --rc geninfo_unexecuted_blocks=1 00:24:17.631 00:24:17.631 ' 00:24:17.631 05:26:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.631 --rc genhtml_branch_coverage=1 00:24:17.631 --rc genhtml_function_coverage=1 00:24:17.631 --rc genhtml_legend=1 00:24:17.631 --rc geninfo_all_blocks=1 00:24:17.631 --rc geninfo_unexecuted_blocks=1 00:24:17.631 00:24:17.631 ' 00:24:17.631 05:26:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.631 --rc genhtml_branch_coverage=1 00:24:17.632 --rc genhtml_function_coverage=1 00:24:17.632 --rc genhtml_legend=1 00:24:17.632 --rc geninfo_all_blocks=1 00:24:17.632 --rc geninfo_unexecuted_blocks=1 00:24:17.632 00:24:17.632 ' 00:24:17.632 05:26:34 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.632 05:26:34 -- nvmf/common.sh@7 -- # uname -s 00:24:17.632 05:26:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.632 05:26:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.632 05:26:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.632 05:26:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.632 05:26:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.632 05:26:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.632 05:26:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.632 05:26:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.632 05:26:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.632 05:26:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.632 05:26:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:17.632 05:26:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:17.632 05:26:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.632 05:26:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.632 05:26:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.632 05:26:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:17.632 05:26:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.632 05:26:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.632 05:26:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.632 05:26:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.632 05:26:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.632 05:26:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.632 05:26:34 -- paths/export.sh@5 -- # export PATH 00:24:17.632 05:26:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.632 05:26:34 -- nvmf/common.sh@46 -- # : 0 00:24:17.632 05:26:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:17.632 05:26:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:17.632 05:26:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:17.632 05:26:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.632 05:26:34 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.632 05:26:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:17.632 05:26:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:17.632 05:26:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:17.632 05:26:34 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:17.632 05:26:34 -- host/async_init.sh@14 -- # null_block_size=512 00:24:17.632 05:26:34 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:17.632 05:26:34 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:17.632 05:26:34 -- host/async_init.sh@20 -- # uuidgen 00:24:17.632 05:26:34 -- host/async_init.sh@20 -- # tr -d - 00:24:17.632 05:26:34 -- host/async_init.sh@20 -- # nguid=07a9ff1fdf3845289edf0c0961cfeb22 00:24:17.632 05:26:34 -- host/async_init.sh@22 -- # nvmftestinit 00:24:17.632 05:26:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:17.632 05:26:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.632 05:26:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:17.632 05:26:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:17.632 05:26:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:17.632 05:26:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.632 05:26:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.632 05:26:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.632 05:26:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:17.632 05:26:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:17.632 05:26:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:17.632 05:26:34 -- common/autotest_common.sh@10 -- # set +x 00:24:24.265 05:26:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:24.265 05:26:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:24.265 05:26:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:24.265 05:26:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:24.265 05:26:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:24.265 05:26:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:24.265 05:26:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:24.265 05:26:40 -- nvmf/common.sh@294 -- # net_devs=() 00:24:24.265 05:26:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:24.265 05:26:40 -- nvmf/common.sh@295 -- # e810=() 00:24:24.265 05:26:40 -- nvmf/common.sh@295 -- # local -ga e810 00:24:24.265 05:26:40 -- nvmf/common.sh@296 -- # x722=() 00:24:24.265 05:26:40 -- nvmf/common.sh@296 -- # local -ga x722 00:24:24.265 05:26:40 -- nvmf/common.sh@297 -- # mlx=() 00:24:24.265 05:26:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:24.265 05:26:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.265 05:26:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:24.265 05:26:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:24.265 05:26:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:24.265 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:24.265 05:26:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.265 05:26:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:24.265 05:26:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:24.265 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:24.265 05:26:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.265 05:26:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:24.265 05:26:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:24.265 05:26:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.265 05:26:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:24.265 05:26:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.265 05:26:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:24.265 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:24.265 05:26:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:24.265 05:26:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.265 05:26:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:24.265 05:26:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.265 05:26:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:24.265 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:24.265 05:26:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.265 05:26:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:24.265 05:26:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:24.265 05:26:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:24:24.265 05:26:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:24.265 05:26:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:24.265 05:26:40 -- nvmf/common.sh@57 -- # uname 00:24:24.265 05:26:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:24.265 05:26:40 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:24.265 05:26:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:24.265 05:26:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:24.265 05:26:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:24.265 05:26:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:24.265 05:26:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:24.265 05:26:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:24.265 05:26:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:24.265 05:26:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:24.265 05:26:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:24.265 05:26:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.265 05:26:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:24.266 05:26:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:24.266 05:26:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.266 05:26:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:24.266 05:26:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@104 -- # continue 2 00:24:24.266 05:26:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@104 -- # continue 2 00:24:24.266 05:26:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:24.266 05:26:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:24.266 05:26:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:24.266 05:26:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:24.266 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:24.266 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:24.266 altname enp217s0f0np0 00:24:24.266 altname ens818f0np0 00:24:24.266 inet 192.168.100.8/24 scope global mlx_0_0 00:24:24.266 valid_lft forever preferred_lft forever 00:24:24.266 05:26:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:24.266 05:26:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:24.266 05:26:40 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:24.266 05:26:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:24.266 05:26:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:24.266 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:24.266 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:24.266 altname enp217s0f1np1 00:24:24.266 altname ens818f1np1 00:24:24.266 inet 192.168.100.9/24 scope global mlx_0_1 00:24:24.266 valid_lft forever preferred_lft forever 00:24:24.266 05:26:40 -- nvmf/common.sh@410 -- # return 0 00:24:24.266 05:26:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:24.266 05:26:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:24.266 05:26:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:24.266 05:26:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:24.266 05:26:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.266 05:26:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:24.266 05:26:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:24.266 05:26:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.266 05:26:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:24.266 05:26:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@104 -- # continue 2 00:24:24.266 05:26:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.266 05:26:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:24.266 05:26:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@104 -- # continue 2 00:24:24.266 05:26:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:24.266 05:26:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:24.266 05:26:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:24.266 05:26:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:24.266 05:26:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:24.266 05:26:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:24.266 192.168.100.9' 00:24:24.266 05:26:40 -- nvmf/common.sh@445 -- # head -n 1 00:24:24.266 05:26:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:24.266 192.168.100.9' 00:24:24.266 05:26:40 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:24.266 05:26:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:24.266 192.168.100.9' 00:24:24.266 05:26:40 -- nvmf/common.sh@446 -- # tail -n +2 00:24:24.266 05:26:40 -- nvmf/common.sh@446 -- # head -n 1 00:24:24.266 05:26:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:24.266 05:26:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:24.266 05:26:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:24.266 05:26:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:24.266 05:26:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:24.266 05:26:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:24.266 05:26:40 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:24.266 05:26:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:24.266 05:26:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.266 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.266 05:26:40 -- nvmf/common.sh@469 -- # nvmfpid=1905545 00:24:24.266 05:26:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:24.266 05:26:40 -- nvmf/common.sh@470 -- # waitforlisten 1905545 00:24:24.266 05:26:40 -- common/autotest_common.sh@829 -- # '[' -z 1905545 ']' 00:24:24.266 05:26:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.266 05:26:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.266 05:26:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.266 05:26:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.266 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.266 [2024-11-19 05:26:40.691756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:24.266 [2024-11-19 05:26:40.691805] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.266 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.266 [2024-11-19 05:26:40.760416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.266 [2024-11-19 05:26:40.797715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:24.266 [2024-11-19 05:26:40.797825] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.266 [2024-11-19 05:26:40.797835] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.266 [2024-11-19 05:26:40.797844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
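Note: the two target addresses used by every RPC below come from the interface scan traced just above. A minimal, self-contained sketch of that parsing (the interface names and 192.168.100.x addresses are the ones from this run; this paraphrases nvmf/common.sh rather than quoting its exact source):

get_ip_address() {
    # First IPv4 address on the given interface, with the /prefix stripped.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9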
00:24:24.266 [2024-11-19 05:26:40.797868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.200 05:26:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.200 05:26:41 -- common/autotest_common.sh@862 -- # return 0 00:24:25.200 05:26:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:25.200 05:26:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 05:26:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.200 05:26:41 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 [2024-11-19 05:26:41.585149] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1467080/0x146b570) succeed. 00:24:25.200 [2024-11-19 05:26:41.593951] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1468580/0x14acc10) succeed. 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 null0 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 07a9ff1fdf3845289edf0c0961cfeb22 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 [2024-11-19 05:26:41.680723] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:25.200 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 nvme0n1 00:24:25.200 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.200 05:26:41 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.200 05:26:41 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.200 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.458 [ 00:24:25.458 { 00:24:25.458 "name": "nvme0n1", 00:24:25.458 "aliases": [ 00:24:25.458 "07a9ff1f-df38-4528-9edf-0c0961cfeb22" 00:24:25.458 ], 00:24:25.458 "product_name": "NVMe disk", 00:24:25.458 "block_size": 512, 00:24:25.458 "num_blocks": 2097152, 00:24:25.458 "uuid": "07a9ff1f-df38-4528-9edf-0c0961cfeb22", 00:24:25.458 "assigned_rate_limits": { 00:24:25.458 "rw_ios_per_sec": 0, 00:24:25.458 "rw_mbytes_per_sec": 0, 00:24:25.458 "r_mbytes_per_sec": 0, 00:24:25.458 "w_mbytes_per_sec": 0 00:24:25.458 }, 00:24:25.458 "claimed": false, 00:24:25.458 "zoned": false, 00:24:25.458 "supported_io_types": { 00:24:25.458 "read": true, 00:24:25.458 "write": true, 00:24:25.458 "unmap": false, 00:24:25.458 "write_zeroes": true, 00:24:25.459 "flush": true, 00:24:25.459 "reset": true, 00:24:25.459 "compare": true, 00:24:25.459 "compare_and_write": true, 00:24:25.459 "abort": true, 00:24:25.459 "nvme_admin": true, 00:24:25.459 "nvme_io": true 00:24:25.459 }, 00:24:25.459 "memory_domains": [ 00:24:25.459 { 00:24:25.459 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:25.459 "dma_device_type": 0 00:24:25.459 } 00:24:25.459 ], 00:24:25.459 "driver_specific": { 00:24:25.459 "nvme": [ 00:24:25.459 { 00:24:25.459 "trid": { 00:24:25.459 "trtype": "RDMA", 00:24:25.459 "adrfam": "IPv4", 00:24:25.459 "traddr": "192.168.100.8", 00:24:25.459 "trsvcid": "4420", 00:24:25.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.459 }, 00:24:25.459 "ctrlr_data": { 00:24:25.459 "cntlid": 1, 00:24:25.459 "vendor_id": "0x8086", 00:24:25.459 "model_number": "SPDK bdev Controller", 00:24:25.459 "serial_number": "00000000000000000000", 00:24:25.459 "firmware_revision": "24.01.1", 00:24:25.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.459 "oacs": { 00:24:25.459 "security": 0, 00:24:25.459 "format": 0, 00:24:25.459 "firmware": 0, 00:24:25.459 "ns_manage": 0 00:24:25.459 }, 00:24:25.459 "multi_ctrlr": true, 00:24:25.459 "ana_reporting": false 00:24:25.459 }, 00:24:25.459 "vs": { 00:24:25.459 "nvme_version": "1.3" 00:24:25.459 }, 00:24:25.459 "ns_data": { 00:24:25.459 "id": 1, 00:24:25.459 "can_share": true 00:24:25.459 } 00:24:25.459 } 00:24:25.459 ], 00:24:25.459 "mp_policy": "active_passive" 00:24:25.459 } 00:24:25.459 } 00:24:25.459 ] 00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 [2024-11-19 05:26:41.784040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:25.459 [2024-11-19 05:26:41.806659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:25.459 [2024-11-19 05:26:41.832017] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
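The reset exercised above goes through the same JSON-RPC socket as the setup calls; the CQ transport error -6 is the expected disconnect during the reset, and the second bdev_get_bdevs dump that follows confirms the controller reconnected with cntlid 2 instead of 1. A condensed sketch of the async_init sequence so far (rpc.py stands in for the test's rpc_cmd wrapper; NQNs, sizes and addresses are the ones traced in this run):

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py bdev_null_create null0 1024 512     # 1 GiB null bdev, 512 B blocks -> 2097152 blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"   # nguid from 'uuidgen | tr -d -'
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_nvme_reset_controller nvme0    # cntlid 1 -> 2 after the reconnect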
00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 [ 00:24:25.459 { 00:24:25.459 "name": "nvme0n1", 00:24:25.459 "aliases": [ 00:24:25.459 "07a9ff1f-df38-4528-9edf-0c0961cfeb22" 00:24:25.459 ], 00:24:25.459 "product_name": "NVMe disk", 00:24:25.459 "block_size": 512, 00:24:25.459 "num_blocks": 2097152, 00:24:25.459 "uuid": "07a9ff1f-df38-4528-9edf-0c0961cfeb22", 00:24:25.459 "assigned_rate_limits": { 00:24:25.459 "rw_ios_per_sec": 0, 00:24:25.459 "rw_mbytes_per_sec": 0, 00:24:25.459 "r_mbytes_per_sec": 0, 00:24:25.459 "w_mbytes_per_sec": 0 00:24:25.459 }, 00:24:25.459 "claimed": false, 00:24:25.459 "zoned": false, 00:24:25.459 "supported_io_types": { 00:24:25.459 "read": true, 00:24:25.459 "write": true, 00:24:25.459 "unmap": false, 00:24:25.459 "write_zeroes": true, 00:24:25.459 "flush": true, 00:24:25.459 "reset": true, 00:24:25.459 "compare": true, 00:24:25.459 "compare_and_write": true, 00:24:25.459 "abort": true, 00:24:25.459 "nvme_admin": true, 00:24:25.459 "nvme_io": true 00:24:25.459 }, 00:24:25.459 "memory_domains": [ 00:24:25.459 { 00:24:25.459 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:25.459 "dma_device_type": 0 00:24:25.459 } 00:24:25.459 ], 00:24:25.459 "driver_specific": { 00:24:25.459 "nvme": [ 00:24:25.459 { 00:24:25.459 "trid": { 00:24:25.459 "trtype": "RDMA", 00:24:25.459 "adrfam": "IPv4", 00:24:25.459 "traddr": "192.168.100.8", 00:24:25.459 "trsvcid": "4420", 00:24:25.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.459 }, 00:24:25.459 "ctrlr_data": { 00:24:25.459 "cntlid": 2, 00:24:25.459 "vendor_id": "0x8086", 00:24:25.459 "model_number": "SPDK bdev Controller", 00:24:25.459 "serial_number": "00000000000000000000", 00:24:25.459 "firmware_revision": "24.01.1", 00:24:25.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.459 "oacs": { 00:24:25.459 "security": 0, 00:24:25.459 "format": 0, 00:24:25.459 "firmware": 0, 00:24:25.459 "ns_manage": 0 00:24:25.459 }, 00:24:25.459 "multi_ctrlr": true, 00:24:25.459 "ana_reporting": false 00:24:25.459 }, 00:24:25.459 "vs": { 00:24:25.459 "nvme_version": "1.3" 00:24:25.459 }, 00:24:25.459 "ns_data": { 00:24:25.459 "id": 1, 00:24:25.459 "can_share": true 00:24:25.459 } 00:24:25.459 } 00:24:25.459 ], 00:24:25.459 "mp_policy": "active_passive" 00:24:25.459 } 00:24:25.459 } 00:24:25.459 ] 00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@53 -- # mktemp 00:24:25.459 05:26:41 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ic40R51355 00:24:25.459 05:26:41 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:25.459 05:26:41 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ic40R51355 00:24:25.459 05:26:41 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 
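The trailing block above stages a TLS pre-shared key on disk and switches the subsystem to explicit host authorization (allow_any_host --disable). A sketch of just the key staging, with the key string copied from this run's trace (an NVMe TLS interchange-format test vector, not a real secret):

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"   # keep the PSK file owner-only before handing it to --psk

The listener added next with --secure-channel, plus the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls that pass --psk, then consume this file, which is why the log notes that TLS support is considered experimental.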
00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 [2024-11-19 05:26:41.916383] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ic40R51355 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 05:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:41 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ic40R51355 00:24:25.459 05:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:41 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 [2024-11-19 05:26:41.936419] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.459 nvme0n1 00:24:25.459 05:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.459 05:26:42 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.459 05:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.459 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.459 [ 00:24:25.459 { 00:24:25.459 "name": "nvme0n1", 00:24:25.459 "aliases": [ 00:24:25.459 "07a9ff1f-df38-4528-9edf-0c0961cfeb22" 00:24:25.459 ], 00:24:25.459 "product_name": "NVMe disk", 00:24:25.459 "block_size": 512, 00:24:25.459 "num_blocks": 2097152, 00:24:25.459 "uuid": "07a9ff1f-df38-4528-9edf-0c0961cfeb22", 00:24:25.718 "assigned_rate_limits": { 00:24:25.718 "rw_ios_per_sec": 0, 00:24:25.718 "rw_mbytes_per_sec": 0, 00:24:25.718 "r_mbytes_per_sec": 0, 00:24:25.718 "w_mbytes_per_sec": 0 00:24:25.718 }, 00:24:25.718 "claimed": false, 00:24:25.718 "zoned": false, 00:24:25.718 "supported_io_types": { 00:24:25.718 "read": true, 00:24:25.718 "write": true, 00:24:25.718 "unmap": false, 00:24:25.718 "write_zeroes": true, 00:24:25.718 "flush": true, 00:24:25.718 "reset": true, 00:24:25.718 "compare": true, 00:24:25.718 "compare_and_write": true, 00:24:25.718 "abort": true, 00:24:25.718 "nvme_admin": true, 00:24:25.718 "nvme_io": true 00:24:25.718 }, 00:24:25.718 "memory_domains": [ 00:24:25.718 { 00:24:25.718 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:25.718 "dma_device_type": 0 00:24:25.718 } 00:24:25.718 ], 00:24:25.718 "driver_specific": { 00:24:25.718 "nvme": [ 00:24:25.718 { 00:24:25.718 "trid": { 00:24:25.718 "trtype": "RDMA", 00:24:25.718 "adrfam": "IPv4", 00:24:25.718 "traddr": "192.168.100.8", 00:24:25.718 "trsvcid": "4421", 00:24:25.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.718 }, 00:24:25.718 "ctrlr_data": { 00:24:25.718 "cntlid": 3, 00:24:25.718 "vendor_id": "0x8086", 00:24:25.718 "model_number": "SPDK bdev Controller", 00:24:25.718 "serial_number": "00000000000000000000", 00:24:25.718 "firmware_revision": "24.01.1", 00:24:25.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.718 
"oacs": { 00:24:25.718 "security": 0, 00:24:25.718 "format": 0, 00:24:25.718 "firmware": 0, 00:24:25.718 "ns_manage": 0 00:24:25.718 }, 00:24:25.718 "multi_ctrlr": true, 00:24:25.718 "ana_reporting": false 00:24:25.718 }, 00:24:25.718 "vs": { 00:24:25.718 "nvme_version": "1.3" 00:24:25.718 }, 00:24:25.718 "ns_data": { 00:24:25.718 "id": 1, 00:24:25.718 "can_share": true 00:24:25.718 } 00:24:25.718 } 00:24:25.718 ], 00:24:25.718 "mp_policy": "active_passive" 00:24:25.718 } 00:24:25.718 } 00:24:25.718 ] 00:24:25.718 05:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.718 05:26:42 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.718 05:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.718 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.718 05:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.718 05:26:42 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ic40R51355 00:24:25.718 05:26:42 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:25.718 05:26:42 -- host/async_init.sh@78 -- # nvmftestfini 00:24:25.718 05:26:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:25.718 05:26:42 -- nvmf/common.sh@116 -- # sync 00:24:25.718 05:26:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:25.718 05:26:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:25.718 05:26:42 -- nvmf/common.sh@119 -- # set +e 00:24:25.718 05:26:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:25.718 05:26:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:25.718 rmmod nvme_rdma 00:24:25.718 rmmod nvme_fabrics 00:24:25.718 05:26:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:25.718 05:26:42 -- nvmf/common.sh@123 -- # set -e 00:24:25.718 05:26:42 -- nvmf/common.sh@124 -- # return 0 00:24:25.718 05:26:42 -- nvmf/common.sh@477 -- # '[' -n 1905545 ']' 00:24:25.718 05:26:42 -- nvmf/common.sh@478 -- # killprocess 1905545 00:24:25.718 05:26:42 -- common/autotest_common.sh@936 -- # '[' -z 1905545 ']' 00:24:25.718 05:26:42 -- common/autotest_common.sh@940 -- # kill -0 1905545 00:24:25.718 05:26:42 -- common/autotest_common.sh@941 -- # uname 00:24:25.718 05:26:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:25.718 05:26:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1905545 00:24:25.718 05:26:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:25.718 05:26:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:25.718 05:26:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1905545' 00:24:25.718 killing process with pid 1905545 00:24:25.718 05:26:42 -- common/autotest_common.sh@955 -- # kill 1905545 00:24:25.718 05:26:42 -- common/autotest_common.sh@960 -- # wait 1905545 00:24:25.976 05:26:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:25.977 05:26:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:25.977 00:24:25.977 real 0m8.534s 00:24:25.977 user 0m3.895s 00:24:25.977 sys 0m5.412s 00:24:25.977 05:26:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:25.977 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.977 ************************************ 00:24:25.977 END TEST nvmf_async_init 00:24:25.977 ************************************ 00:24:25.977 05:26:42 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:25.977 05:26:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:25.977 
05:26:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.977 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.977 ************************************ 00:24:25.977 START TEST dma 00:24:25.977 ************************************ 00:24:25.977 05:26:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:25.977 * Looking for test storage... 00:24:26.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:26.236 05:26:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:26.236 05:26:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:26.236 05:26:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:26.236 05:26:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:26.236 05:26:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:26.236 05:26:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:26.236 05:26:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:26.236 05:26:42 -- scripts/common.sh@335 -- # IFS=.-: 00:24:26.236 05:26:42 -- scripts/common.sh@335 -- # read -ra ver1 00:24:26.236 05:26:42 -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.236 05:26:42 -- scripts/common.sh@336 -- # read -ra ver2 00:24:26.236 05:26:42 -- scripts/common.sh@337 -- # local 'op=<' 00:24:26.236 05:26:42 -- scripts/common.sh@339 -- # ver1_l=2 00:24:26.236 05:26:42 -- scripts/common.sh@340 -- # ver2_l=1 00:24:26.236 05:26:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:26.236 05:26:42 -- scripts/common.sh@343 -- # case "$op" in 00:24:26.236 05:26:42 -- scripts/common.sh@344 -- # : 1 00:24:26.236 05:26:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:26.236 05:26:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.236 05:26:42 -- scripts/common.sh@364 -- # decimal 1 00:24:26.236 05:26:42 -- scripts/common.sh@352 -- # local d=1 00:24:26.236 05:26:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.236 05:26:42 -- scripts/common.sh@354 -- # echo 1 00:24:26.236 05:26:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:26.236 05:26:42 -- scripts/common.sh@365 -- # decimal 2 00:24:26.236 05:26:42 -- scripts/common.sh@352 -- # local d=2 00:24:26.236 05:26:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.236 05:26:42 -- scripts/common.sh@354 -- # echo 2 00:24:26.236 05:26:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:26.236 05:26:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:26.236 05:26:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:26.236 05:26:42 -- scripts/common.sh@367 -- # return 0 00:24:26.236 05:26:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.236 05:26:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.236 --rc genhtml_branch_coverage=1 00:24:26.236 --rc genhtml_function_coverage=1 00:24:26.236 --rc genhtml_legend=1 00:24:26.236 --rc geninfo_all_blocks=1 00:24:26.236 --rc geninfo_unexecuted_blocks=1 00:24:26.236 00:24:26.236 ' 00:24:26.236 05:26:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.236 --rc genhtml_branch_coverage=1 00:24:26.236 --rc genhtml_function_coverage=1 00:24:26.236 --rc genhtml_legend=1 00:24:26.236 --rc geninfo_all_blocks=1 00:24:26.236 --rc geninfo_unexecuted_blocks=1 00:24:26.236 00:24:26.236 ' 00:24:26.236 05:26:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.236 --rc genhtml_branch_coverage=1 00:24:26.236 --rc genhtml_function_coverage=1 00:24:26.236 --rc genhtml_legend=1 00:24:26.236 --rc geninfo_all_blocks=1 00:24:26.236 --rc geninfo_unexecuted_blocks=1 00:24:26.236 00:24:26.236 ' 00:24:26.236 05:26:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.236 --rc genhtml_branch_coverage=1 00:24:26.236 --rc genhtml_function_coverage=1 00:24:26.236 --rc genhtml_legend=1 00:24:26.236 --rc geninfo_all_blocks=1 00:24:26.236 --rc geninfo_unexecuted_blocks=1 00:24:26.236 00:24:26.236 ' 00:24:26.236 05:26:42 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.236 05:26:42 -- nvmf/common.sh@7 -- # uname -s 00:24:26.236 05:26:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.236 05:26:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.236 05:26:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.236 05:26:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.236 05:26:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.236 05:26:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.236 05:26:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.236 05:26:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.236 05:26:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.236 05:26:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.236 05:26:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
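The version probe re-traced here (as at the top of the async_init run) is a generic field-wise compare used to choose lcov option spellings. A minimal re-implementation of the idea, assuming only what the trace shows (split both versions on '.', '-' and ':', pad the shorter with zeros, compare field by field); it is not the exact scripts/common.sh source:

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ver1[v] = ${ver1[v]:-0}, ver2[v] = ${ver2[v]:-0} ))
        if (( ver1[v] > ver2[v] )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( ver1[v] < ver2[v] )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
cmp_versions 1.15 '<' 2 && echo 'lcov older than 2.x: use --rc lcov_branch_coverage=1 style options'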
00:24:26.236 05:26:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:26.236 05:26:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.236 05:26:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.236 05:26:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.236 05:26:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:26.236 05:26:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.236 05:26:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.236 05:26:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.236 05:26:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.236 05:26:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.236 05:26:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.236 05:26:42 -- paths/export.sh@5 -- # export PATH 00:24:26.236 05:26:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.236 05:26:42 -- nvmf/common.sh@46 -- # : 0 00:24:26.236 05:26:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:26.236 05:26:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:26.236 05:26:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:26.236 05:26:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.236 05:26:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.236 05:26:42 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:26.236 05:26:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:26.236 05:26:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:26.236 05:26:42 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:26.236 05:26:42 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:26.236 05:26:42 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:26.236 05:26:42 -- host/dma.sh@18 -- # subsystem=0 00:24:26.236 05:26:42 -- host/dma.sh@93 -- # nvmftestinit 00:24:26.236 05:26:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:26.236 05:26:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.236 05:26:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:26.236 05:26:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:26.236 05:26:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:26.236 05:26:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.236 05:26:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.236 05:26:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.236 05:26:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:26.236 05:26:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:26.236 05:26:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:26.236 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:24:32.810 05:26:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:32.810 05:26:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:32.810 05:26:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:32.810 05:26:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:32.810 05:26:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:32.810 05:26:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:32.810 05:26:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:32.810 05:26:49 -- nvmf/common.sh@294 -- # net_devs=() 00:24:32.810 05:26:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:32.810 05:26:49 -- nvmf/common.sh@295 -- # e810=() 00:24:32.810 05:26:49 -- nvmf/common.sh@295 -- # local -ga e810 00:24:32.810 05:26:49 -- nvmf/common.sh@296 -- # x722=() 00:24:32.810 05:26:49 -- nvmf/common.sh@296 -- # local -ga x722 00:24:32.810 05:26:49 -- nvmf/common.sh@297 -- # mlx=() 00:24:32.810 05:26:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:32.810 05:26:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.810 05:26:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.810 05:26:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:32.810 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:32.810 05:26:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:32.810 05:26:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.810 05:26:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:32.810 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:32.810 05:26:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:32.810 05:26:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.810 05:26:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.810 05:26:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.810 05:26:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:32.810 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:32.810 05:26:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.810 05:26:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.810 05:26:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.810 05:26:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:32.810 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:32.810 05:26:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.810 05:26:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:32.810 05:26:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:32.810 05:26:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:32.810 05:26:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:32.810 05:26:49 -- nvmf/common.sh@57 -- # uname 00:24:32.810 05:26:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:32.810 05:26:49 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:32.810 05:26:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:32.810 05:26:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:32.810 05:26:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:32.810 05:26:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:32.810 05:26:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:32.810 05:26:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:32.810 05:26:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:32.810 05:26:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:32.810 05:26:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:32.810 05:26:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:32.810 05:26:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:32.810 05:26:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:32.810 05:26:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:32.810 05:26:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:32.810 05:26:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@104 -- # continue 2 00:24:32.811 05:26:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@104 -- # continue 2 00:24:32.811 05:26:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:32.811 05:26:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.811 05:26:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:32.811 05:26:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:32.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:32.811 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:32.811 altname enp217s0f0np0 00:24:32.811 altname ens818f0np0 00:24:32.811 inet 192.168.100.8/24 scope global mlx_0_0 00:24:32.811 valid_lft forever preferred_lft forever 00:24:32.811 05:26:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:32.811 05:26:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.811 05:26:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:32.811 05:26:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:32.811 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:32.811 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:32.811 altname enp217s0f1np1 00:24:32.811 altname ens818f1np1 00:24:32.811 inet 192.168.100.9/24 scope global mlx_0_1 00:24:32.811 valid_lft forever preferred_lft forever 00:24:32.811 05:26:49 -- nvmf/common.sh@410 -- # return 0 00:24:32.811 05:26:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:32.811 05:26:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:32.811 05:26:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:32.811 05:26:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:32.811 05:26:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:32.811 05:26:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:32.811 05:26:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:32.811 05:26:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:32.811 05:26:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:32.811 05:26:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@104 -- # continue 2 00:24:32.811 05:26:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.811 05:26:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:32.811 05:26:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@104 -- # continue 2 00:24:32.811 05:26:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:32.811 05:26:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.811 05:26:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:32.811 05:26:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.811 05:26:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.811 05:26:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:32.811 192.168.100.9' 00:24:32.811 05:26:49 -- nvmf/common.sh@445 -- # head -n 1 00:24:32.811 05:26:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:32.811 192.168.100.9' 00:24:32.811 05:26:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:32.811 05:26:49 -- nvmf/common.sh@446 -- # head -n 1 00:24:32.811 05:26:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:32.811 192.168.100.9' 00:24:32.811 05:26:49 -- nvmf/common.sh@446 -- # tail -n +2 00:24:32.811 05:26:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:32.811 05:26:49 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:32.811 05:26:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:32.811 05:26:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:32.811 05:26:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:32.811 05:26:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:32.811 05:26:49 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:32.811 05:26:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:32.811 05:26:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.811 05:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:32.811 05:26:49 -- nvmf/common.sh@469 -- # nvmfpid=1909268 00:24:32.811 05:26:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:32.811 05:26:49 -- nvmf/common.sh@470 -- # waitforlisten 1909268 00:24:32.811 05:26:49 -- common/autotest_common.sh@829 -- # '[' -z 1909268 ']' 00:24:32.811 05:26:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.811 05:26:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.811 05:26:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.811 05:26:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.811 05:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.073 [2024-11-19 05:26:49.388654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:33.073 [2024-11-19 05:26:49.388707] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.073 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.073 [2024-11-19 05:26:49.458042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:33.073 [2024-11-19 05:26:49.495610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:33.073 [2024-11-19 05:26:49.495719] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.073 [2024-11-19 05:26:49.495729] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.073 [2024-11-19 05:26:49.495737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
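The trace above shows nvmf/common.sh deriving the two RDMA target addresses by walking the mlx interfaces and stripping the prefix length from each ip(8) record. A minimal standalone sketch of that extraction, assuming the mlx_0_0/mlx_0_1 interface names seen in this log:

    # Sketch of the RDMA IP discovery traced above (interface names assumed from this log).
    for ifc in mlx_0_0 mlx_0_1; do
        # "ip -o -4 addr show" prints one record per line; field 4 is ADDR/PREFIX.
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # The first line becomes NVMF_FIRST_TARGET_IP (192.168.100.8 here),
    # the second NVMF_SECOND_TARGET_IP (192.168.100.9).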
00:24:33.073 [2024-11-19 05:26:49.495783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.073 [2024-11-19 05:26:49.495785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.005 05:26:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.005 05:26:50 -- common/autotest_common.sh@862 -- # return 0 00:24:34.005 05:26:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:34.005 05:26:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 05:26:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.005 05:26:50 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:34.005 05:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 [2024-11-19 05:26:50.288683] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ad6bb0/0x1adb0a0) succeed. 00:24:34.005 [2024-11-19 05:26:50.297655] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ad80b0/0x1b1c740) succeed. 00:24:34.005 05:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.005 05:26:50 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:34.005 05:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 Malloc0 00:24:34.005 05:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.005 05:26:50 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:34.005 05:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 05:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.005 05:26:50 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:34.005 05:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 05:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.005 05:26:50 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:34.005 05:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.005 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:34.005 [2024-11-19 05:26:50.451089] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:34.005 05:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.005 05:26:50 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:34.005 05:26:50 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:34.006 05:26:50 -- nvmf/common.sh@520 -- # config=() 00:24:34.006 05:26:50 -- nvmf/common.sh@520 -- # local subsystem config 00:24:34.006 05:26:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:34.006 05:26:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:34.006 { 00:24:34.006 "params": { 00:24:34.006 "name": "Nvme$subsystem", 00:24:34.006 "trtype": "$TEST_TRANSPORT", 00:24:34.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.006 "adrfam": 
"ipv4", 00:24:34.006 "trsvcid": "$NVMF_PORT", 00:24:34.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.006 "hdgst": ${hdgst:-false}, 00:24:34.006 "ddgst": ${ddgst:-false} 00:24:34.006 }, 00:24:34.006 "method": "bdev_nvme_attach_controller" 00:24:34.006 } 00:24:34.006 EOF 00:24:34.006 )") 00:24:34.006 05:26:50 -- nvmf/common.sh@542 -- # cat 00:24:34.006 05:26:50 -- nvmf/common.sh@544 -- # jq . 00:24:34.006 05:26:50 -- nvmf/common.sh@545 -- # IFS=, 00:24:34.006 05:26:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:34.006 "params": { 00:24:34.006 "name": "Nvme0", 00:24:34.006 "trtype": "rdma", 00:24:34.006 "traddr": "192.168.100.8", 00:24:34.006 "adrfam": "ipv4", 00:24:34.006 "trsvcid": "4420", 00:24:34.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:34.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:34.006 "hdgst": false, 00:24:34.006 "ddgst": false 00:24:34.006 }, 00:24:34.006 "method": "bdev_nvme_attach_controller" 00:24:34.006 }' 00:24:34.006 [2024-11-19 05:26:50.497767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:34.006 [2024-11-19 05:26:50.497813] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909335 ] 00:24:34.006 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.006 [2024-11-19 05:26:50.565430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.263 [2024-11-19 05:26:50.602828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.263 [2024-11-19 05:26:50.602832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.519 bdev Nvme0n1 reports 1 memory domains 00:24:39.519 bdev Nvme0n1 supports RDMA memory domain 00:24:39.519 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:39.520 ========================================================================== 00:24:39.520 Latency [us] 00:24:39.520 IOPS MiB/s Average min max 00:24:39.520 Core 2: 21791.19 85.12 733.36 238.06 8430.42 00:24:39.520 Core 3: 21895.96 85.53 729.88 239.24 8491.47 00:24:39.520 ========================================================================== 00:24:39.520 Total : 43687.15 170.65 731.62 238.06 8491.47 00:24:39.520 00:24:39.520 Total operations: 218514, translate 218514 pull_push 0 memzero 0 00:24:39.520 05:26:55 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:39.520 05:26:55 -- host/dma.sh@107 -- # gen_malloc_json 00:24:39.520 05:26:55 -- host/dma.sh@21 -- # jq . 00:24:39.520 [2024-11-19 05:26:56.025109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:39.520 [2024-11-19 05:26:56.025163] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910372 ] 00:24:39.520 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.777 [2024-11-19 05:26:56.094296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:39.777 [2024-11-19 05:26:56.131021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.777 [2024-11-19 05:26:56.131022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.036 bdev Malloc0 reports 1 memory domains 00:24:45.036 bdev Malloc0 doesn't support RDMA memory domain 00:24:45.036 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:45.036 ========================================================================== 00:24:45.036 Latency [us] 00:24:45.036 IOPS MiB/s Average min max 00:24:45.036 Core 2: 14975.70 58.50 1067.69 369.51 1796.92 00:24:45.036 Core 3: 15297.18 59.75 1045.22 390.22 1768.74 00:24:45.036 ========================================================================== 00:24:45.036 Total : 30272.88 118.25 1056.33 369.51 1796.92 00:24:45.036 00:24:45.036 Total operations: 151418, translate 0 pull_push 605672 memzero 0 00:24:45.036 05:27:01 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:45.036 05:27:01 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:45.036 05:27:01 -- host/dma.sh@48 -- # local subsystem=0 00:24:45.036 05:27:01 -- host/dma.sh@50 -- # jq . 00:24:45.036 Ignoring -M option 00:24:45.036 [2024-11-19 05:27:01.459266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
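The Malloc0 table above is internally consistent: at the 4096-byte I/O size (-o 4096), MiB/s is just IOPS/256, and the Total IOPS row is the per-core sum. A quick check of the Total row:

    # MiB/s = IOPS * 4096 / 2^20 = IOPS / 256
    awk 'BEGIN { printf "%.2f %.2f\n", 14975.70 + 15297.18, 30272.88 / 256 }'
    # -> 30272.88 118.25, matching the Total row above

Note also that Malloc0 reports no RDMA memory domain, so every operation goes down the pull_push path (605672 pull_push, 0 translate), unlike the Nvme0n1 run before it.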
00:24:45.036 [2024-11-19 05:27:01.459327] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911293 ] 00:24:45.036 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.036 [2024-11-19 05:27:01.527798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.036 [2024-11-19 05:27:01.562770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.036 [2024-11-19 05:27:01.562772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.294 [2024-11-19 05:27:01.763078] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:50.551 [2024-11-19 05:27:06.791595] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:50.551 bdev 931ab0bb-6c40-4469-b143-f86c8b101d5e reports 1 memory domains 00:24:50.551 bdev 931ab0bb-6c40-4469-b143-f86c8b101d5e supports RDMA memory domain 00:24:50.551 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:50.551 ========================================================================== 00:24:50.551 Latency [us] 00:24:50.551 IOPS MiB/s Average min max 00:24:50.551 Core 2: 74504.32 291.03 213.89 84.41 2968.97 00:24:50.551 Core 3: 70434.42 275.13 226.25 71.75 2869.64 00:24:50.551 ========================================================================== 00:24:50.551 Total : 144938.74 566.17 219.90 71.75 2968.97 00:24:50.551 00:24:50.551 Total operations: 724784, translate 0 pull_push 0 memzero 724784 00:24:50.551 05:27:06 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:50.551 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.551 [2024-11-19 05:27:07.093903] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:53.073 Initializing NVMe Controllers 00:24:53.073 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:53.073 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:53.073 Initialization complete. Launching workers. 00:24:53.073 ======================================================== 00:24:53.073 Latency(us) 00:24:53.073 Device Information : IOPS MiB/s Average min max 00:24:53.073 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2010.88 7.86 7956.39 5993.36 8953.34 00:24:53.073 ======================================================== 00:24:53.073 Total : 2010.88 7.86 7956.39 5993.36 8953.34 00:24:53.073 00:24:53.074 05:27:09 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:53.074 05:27:09 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:53.074 05:27:09 -- host/dma.sh@48 -- # local subsystem=0 00:24:53.074 05:27:09 -- host/dma.sh@50 -- # jq . 
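Between the dma runs, dma.sh@113 drives spdk_nvme_perf at the same listener through the discovery subsystem, which is what triggers the deprecation warning about discovery-listener connects. A hedged variant of that invocation: the -r transport ID string also accepts a subnqn token, as the spdk_nvme_identify run later in this log uses, so the perf run could pin cnode0 directly instead:

    # Assumed variant of the traced perf command; the subnqn token follows the
    # transport ID string format used by the identify run further below.
    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'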
00:24:53.074 [2024-11-19 05:27:09.434414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:53.074 [2024-11-19 05:27:09.434470] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913087 ] 00:24:53.074 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.074 [2024-11-19 05:27:09.501536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:53.074 [2024-11-19 05:27:09.538485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.074 [2024-11-19 05:27:09.538486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.331 [2024-11-19 05:27:09.750139] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:58.583 [2024-11-19 05:27:14.778910] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:58.583 bdev f12c97c3-57f9-48b6-b669-7b01c0593701 reports 1 memory domains 00:24:58.584 bdev f12c97c3-57f9-48b6-b669-7b01c0593701 supports RDMA memory domain 00:24:58.584 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:58.584 ========================================================================== 00:24:58.584 Latency [us] 00:24:58.584 IOPS MiB/s Average min max 00:24:58.584 Core 2: 19146.88 74.79 834.96 15.21 9349.53 00:24:58.584 Core 3: 19527.22 76.28 818.68 23.21 9520.04 00:24:58.584 ========================================================================== 00:24:58.584 Total : 38674.10 151.07 826.74 15.21 9520.04 00:24:58.584 00:24:58.584 Total operations: 193404, translate 193299 pull_push 0 memzero 105 00:24:58.584 05:27:14 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:58.584 05:27:14 -- host/dma.sh@120 -- # nvmftestfini 00:24:58.584 05:27:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:58.584 05:27:14 -- nvmf/common.sh@116 -- # sync 00:24:58.584 05:27:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:58.584 05:27:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:58.584 05:27:14 -- nvmf/common.sh@119 -- # set +e 00:24:58.584 05:27:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:58.584 05:27:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:58.584 rmmod nvme_rdma 00:24:58.584 rmmod nvme_fabrics 00:24:58.584 05:27:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:58.584 05:27:15 -- nvmf/common.sh@123 -- # set -e 00:24:58.584 05:27:15 -- nvmf/common.sh@124 -- # return 0 00:24:58.584 05:27:15 -- nvmf/common.sh@477 -- # '[' -n 1909268 ']' 00:24:58.584 05:27:15 -- nvmf/common.sh@478 -- # killprocess 1909268 00:24:58.584 05:27:15 -- common/autotest_common.sh@936 -- # '[' -z 1909268 ']' 00:24:58.584 05:27:15 -- common/autotest_common.sh@940 -- # kill -0 1909268 00:24:58.584 05:27:15 -- common/autotest_common.sh@941 -- # uname 00:24:58.584 05:27:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:58.584 05:27:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1909268 00:24:58.584 05:27:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:58.584 05:27:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:58.584 05:27:15 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1909268' 00:24:58.584 killing process with pid 1909268 00:24:58.584 05:27:15 -- common/autotest_common.sh@955 -- # kill 1909268 00:24:58.584 05:27:15 -- common/autotest_common.sh@960 -- # wait 1909268 00:24:58.841 05:27:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:58.841 05:27:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:58.841 00:24:58.841 real 0m32.932s 00:24:58.841 user 1m36.202s 00:24:58.841 sys 0m6.335s 00:24:58.841 05:27:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:58.841 05:27:15 -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 ************************************ 00:24:58.841 END TEST dma 00:24:58.841 ************************************ 00:24:59.099 05:27:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:59.099 05:27:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:59.099 05:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.099 05:27:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.099 ************************************ 00:24:59.099 START TEST nvmf_identify 00:24:59.099 ************************************ 00:24:59.099 05:27:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:59.099 * Looking for test storage... 00:24:59.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:59.099 05:27:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:59.099 05:27:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:59.099 05:27:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:59.099 05:27:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:59.099 05:27:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:59.099 05:27:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:59.099 05:27:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:59.099 05:27:15 -- scripts/common.sh@335 -- # IFS=.-: 00:24:59.099 05:27:15 -- scripts/common.sh@335 -- # read -ra ver1 00:24:59.099 05:27:15 -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.099 05:27:15 -- scripts/common.sh@336 -- # read -ra ver2 00:24:59.099 05:27:15 -- scripts/common.sh@337 -- # local 'op=<' 00:24:59.099 05:27:15 -- scripts/common.sh@339 -- # ver1_l=2 00:24:59.099 05:27:15 -- scripts/common.sh@340 -- # ver2_l=1 00:24:59.099 05:27:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:59.099 05:27:15 -- scripts/common.sh@343 -- # case "$op" in 00:24:59.099 05:27:15 -- scripts/common.sh@344 -- # : 1 00:24:59.099 05:27:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:59.099 05:27:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.099 05:27:15 -- scripts/common.sh@364 -- # decimal 1 00:24:59.099 05:27:15 -- scripts/common.sh@352 -- # local d=1 00:24:59.099 05:27:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.099 05:27:15 -- scripts/common.sh@354 -- # echo 1 00:24:59.099 05:27:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:59.099 05:27:15 -- scripts/common.sh@365 -- # decimal 2 00:24:59.099 05:27:15 -- scripts/common.sh@352 -- # local d=2 00:24:59.099 05:27:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.099 05:27:15 -- scripts/common.sh@354 -- # echo 2 00:24:59.099 05:27:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:59.099 05:27:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:59.099 05:27:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:59.099 05:27:15 -- scripts/common.sh@367 -- # return 0 00:24:59.099 05:27:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.099 05:27:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:59.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.099 --rc genhtml_branch_coverage=1 00:24:59.100 --rc genhtml_function_coverage=1 00:24:59.100 --rc genhtml_legend=1 00:24:59.100 --rc geninfo_all_blocks=1 00:24:59.100 --rc geninfo_unexecuted_blocks=1 00:24:59.100 00:24:59.100 ' 00:24:59.100 05:27:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:59.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.100 --rc genhtml_branch_coverage=1 00:24:59.100 --rc genhtml_function_coverage=1 00:24:59.100 --rc genhtml_legend=1 00:24:59.100 --rc geninfo_all_blocks=1 00:24:59.100 --rc geninfo_unexecuted_blocks=1 00:24:59.100 00:24:59.100 ' 00:24:59.100 05:27:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:59.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.100 --rc genhtml_branch_coverage=1 00:24:59.100 --rc genhtml_function_coverage=1 00:24:59.100 --rc genhtml_legend=1 00:24:59.100 --rc geninfo_all_blocks=1 00:24:59.100 --rc geninfo_unexecuted_blocks=1 00:24:59.100 00:24:59.100 ' 00:24:59.100 05:27:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:59.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.100 --rc genhtml_branch_coverage=1 00:24:59.100 --rc genhtml_function_coverage=1 00:24:59.100 --rc genhtml_legend=1 00:24:59.100 --rc geninfo_all_blocks=1 00:24:59.100 --rc geninfo_unexecuted_blocks=1 00:24:59.100 00:24:59.100 ' 00:24:59.100 05:27:15 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.100 05:27:15 -- nvmf/common.sh@7 -- # uname -s 00:24:59.100 05:27:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.100 05:27:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.100 05:27:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.100 05:27:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.100 05:27:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.100 05:27:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.100 05:27:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.100 05:27:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.100 05:27:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.100 05:27:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.100 05:27:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:59.100 05:27:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:59.100 05:27:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.100 05:27:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.100 05:27:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.100 05:27:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:59.100 05:27:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.100 05:27:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.100 05:27:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.100 05:27:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.100 05:27:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.100 05:27:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.100 05:27:15 -- paths/export.sh@5 -- # export PATH 00:24:59.100 05:27:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.100 05:27:15 -- nvmf/common.sh@46 -- # : 0 00:24:59.100 05:27:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:59.100 05:27:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:59.100 05:27:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:59.100 05:27:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.100 05:27:15 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.100 05:27:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:59.100 05:27:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:59.100 05:27:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:59.100 05:27:15 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.100 05:27:15 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.100 05:27:15 -- host/identify.sh@14 -- # nvmftestinit 00:24:59.100 05:27:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:59.100 05:27:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.100 05:27:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:59.100 05:27:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:59.100 05:27:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:59.100 05:27:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.100 05:27:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.100 05:27:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.100 05:27:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:59.100 05:27:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:59.100 05:27:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:59.359 05:27:15 -- common/autotest_common.sh@10 -- # set +x 00:25:05.915 05:27:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.915 05:27:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:05.915 05:27:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:05.915 05:27:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:05.915 05:27:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:05.915 05:27:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:05.915 05:27:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:05.915 05:27:22 -- nvmf/common.sh@294 -- # net_devs=() 00:25:05.915 05:27:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:05.915 05:27:22 -- nvmf/common.sh@295 -- # e810=() 00:25:05.915 05:27:22 -- nvmf/common.sh@295 -- # local -ga e810 00:25:05.915 05:27:22 -- nvmf/common.sh@296 -- # x722=() 00:25:05.915 05:27:22 -- nvmf/common.sh@296 -- # local -ga x722 00:25:05.915 05:27:22 -- nvmf/common.sh@297 -- # mlx=() 00:25:05.915 05:27:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:05.915 05:27:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.915 05:27:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:05.915 05:27:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:05.915 
05:27:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:05.915 05:27:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:05.915 05:27:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:05.915 05:27:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.915 05:27:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:05.915 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:05.915 05:27:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.915 05:27:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.915 05:27:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:05.915 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:05.915 05:27:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.915 05:27:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:05.915 05:27:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:05.915 05:27:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.915 05:27:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.915 05:27:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.915 05:27:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.916 05:27:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:05.916 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.916 05:27:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.916 05:27:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.916 05:27:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.916 05:27:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:05.916 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.916 05:27:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:05.916 05:27:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:05.916 05:27:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:05.916 05:27:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:05.916 05:27:22 -- nvmf/common.sh@57 -- # uname 00:25:05.916 05:27:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:05.916 05:27:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:05.916 
05:27:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:05.916 05:27:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:05.916 05:27:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:05.916 05:27:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:05.916 05:27:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:05.916 05:27:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:05.916 05:27:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:05.916 05:27:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:05.916 05:27:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:05.916 05:27:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.916 05:27:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.916 05:27:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.916 05:27:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.916 05:27:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.916 05:27:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@104 -- # continue 2 00:25:05.916 05:27:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@104 -- # continue 2 00:25:05.916 05:27:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.916 05:27:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:05.916 05:27:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:05.916 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.916 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:05.916 altname enp217s0f0np0 00:25:05.916 altname ens818f0np0 00:25:05.916 inet 192.168.100.8/24 scope global mlx_0_0 00:25:05.916 valid_lft forever preferred_lft forever 00:25:05.916 05:27:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.916 05:27:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.916 05:27:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:05.916 05:27:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:05.916 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:05.916 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:05.916 altname enp217s0f1np1 00:25:05.916 altname ens818f1np1 00:25:05.916 inet 192.168.100.9/24 scope global mlx_0_1 00:25:05.916 valid_lft forever preferred_lft forever 00:25:05.916 05:27:22 -- nvmf/common.sh@410 -- # return 0 00:25:05.916 05:27:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:05.916 05:27:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:05.916 05:27:22 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:05.916 05:27:22 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:05.916 05:27:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.916 05:27:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.916 05:27:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.916 05:27:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.916 05:27:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.916 05:27:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@104 -- # continue 2 00:25:05.916 05:27:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.916 05:27:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.916 05:27:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@104 -- # continue 2 00:25:05.916 05:27:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.916 05:27:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.916 05:27:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.916 05:27:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.916 05:27:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.916 05:27:22 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:05.916 192.168.100.9' 00:25:05.916 05:27:22 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:05.916 192.168.100.9' 00:25:05.916 05:27:22 -- nvmf/common.sh@445 -- # head -n 1 00:25:05.916 05:27:22 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:05.916 05:27:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:05.916 192.168.100.9' 00:25:05.916 05:27:22 -- nvmf/common.sh@446 -- # tail -n +2 00:25:05.916 05:27:22 -- nvmf/common.sh@446 -- # head -n 1 00:25:05.916 05:27:22 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:05.916 05:27:22 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:25:05.916 05:27:22 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:05.916 05:27:22 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:05.916 05:27:22 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:05.916 05:27:22 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:05.916 05:27:22 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:05.916 05:27:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:05.916 05:27:22 -- common/autotest_common.sh@10 -- # set +x 00:25:05.916 05:27:22 -- host/identify.sh@19 -- # nvmfpid=1917350 00:25:05.916 05:27:22 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.916 05:27:22 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.916 05:27:22 -- host/identify.sh@23 -- # waitforlisten 1917350 00:25:05.916 05:27:22 -- common/autotest_common.sh@829 -- # '[' -z 1917350 ']' 00:25:05.916 05:27:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.916 05:27:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.916 05:27:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.916 05:27:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.916 05:27:22 -- common/autotest_common.sh@10 -- # set +x 00:25:05.916 [2024-11-19 05:27:22.440681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:05.916 [2024-11-19 05:27:22.440734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.916 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.175 [2024-11-19 05:27:22.511097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.175 [2024-11-19 05:27:22.549758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.175 [2024-11-19 05:27:22.549886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.175 [2024-11-19 05:27:22.549896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.175 [2024-11-19 05:27:22.549905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
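identify.sh reaches this point through the same nvmftestinit path as the dma test: rdma_device_init loads the kernel RDMA stack before any addresses are assigned. The load order traced above, as a standalone sketch:

    # Module load order from load_ib_rdma_modules in the trace above.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    modprobe nvme-rdma   # loaded afterwards, once NVMF_TRANSPORT_OPTS is set to '-t rdma'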
00:25:06.175 [2024-11-19 05:27:22.549954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.175 [2024-11-19 05:27:22.550055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.175 [2024-11-19 05:27:22.550126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.175 [2024-11-19 05:27:22.550128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.738 05:27:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.738 05:27:23 -- common/autotest_common.sh@862 -- # return 0 00:25:06.738 05:27:23 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:06.738 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.738 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.738 [2024-11-19 05:27:23.284707] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179c200/0x17a06f0) succeed. 00:25:06.738 [2024-11-19 05:27:23.293819] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x179d7f0/0x17e1d90) succeed. 00:25:06.995 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.995 05:27:23 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:06.995 05:27:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.995 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.995 05:27:23 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.995 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.995 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.995 Malloc0 00:25:06.995 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.995 05:27:23 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.995 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.995 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.995 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.995 05:27:23 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:06.995 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.996 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.996 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.996 05:27:23 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:06.996 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.996 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.996 [2024-11-19 05:27:23.499073] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:06.996 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.996 05:27:23 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:06.996 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.996 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.996 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.996 05:27:23 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:06.996 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.996 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:06.996 [2024-11-19 
05:27:23.514746] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:06.996 [ 00:25:06.996 { 00:25:06.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:06.996 "subtype": "Discovery", 00:25:06.996 "listen_addresses": [ 00:25:06.996 { 00:25:06.996 "transport": "RDMA", 00:25:06.996 "trtype": "RDMA", 00:25:06.996 "adrfam": "IPv4", 00:25:06.996 "traddr": "192.168.100.8", 00:25:06.996 "trsvcid": "4420" 00:25:06.996 } 00:25:06.996 ], 00:25:06.996 "allow_any_host": true, 00:25:06.996 "hosts": [] 00:25:06.996 }, 00:25:06.996 { 00:25:06.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.996 "subtype": "NVMe", 00:25:06.996 "listen_addresses": [ 00:25:06.996 { 00:25:06.996 "transport": "RDMA", 00:25:06.996 "trtype": "RDMA", 00:25:06.996 "adrfam": "IPv4", 00:25:06.996 "traddr": "192.168.100.8", 00:25:06.996 "trsvcid": "4420" 00:25:06.996 } 00:25:06.996 ], 00:25:06.996 "allow_any_host": true, 00:25:06.996 "hosts": [], 00:25:06.996 "serial_number": "SPDK00000000000001", 00:25:06.996 "model_number": "SPDK bdev Controller", 00:25:06.996 "max_namespaces": 32, 00:25:06.996 "min_cntlid": 1, 00:25:06.996 "max_cntlid": 65519, 00:25:06.996 "namespaces": [ 00:25:06.996 { 00:25:06.996 "nsid": 1, 00:25:06.996 "bdev_name": "Malloc0", 00:25:06.996 "name": "Malloc0", 00:25:06.996 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:06.996 "eui64": "ABCDEF0123456789", 00:25:06.996 "uuid": "2c368676-8536-4535-b246-2a208dce9c90" 00:25:06.996 } 00:25:06.996 ] 00:25:06.996 } 00:25:06.996 ] 00:25:06.996 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.996 05:27:23 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:06.996 [2024-11-19 05:27:23.554764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:06.996 [2024-11-19 05:27:23.554801] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917631 ] 00:25:07.261 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.261 [2024-11-19 05:27:23.601806] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:07.261 [2024-11-19 05:27:23.601875] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:07.261 [2024-11-19 05:27:23.601897] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:07.261 [2024-11-19 05:27:23.601902] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:07.261 [2024-11-19 05:27:23.601931] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:07.261 [2024-11-19 05:27:23.620049] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
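The nvmf_get_subsystems dump above is what the discovery log page will advertise; spdk_nvme_identify then connects with the transport ID string shown (trtype:rdma, traddr 192.168.100.8, trsvcid 4420, subnqn set to the discovery NQN). For reference, a hedged kernel-initiator counterpart with nvme-cli (this log's NVME_CONNECT variable wraps the same tool, but the discover subcommand is not exercised by identify.sh itself):

    # Assumed nvme-cli equivalent of the discovery step driven by spdk_nvme_identify above.
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    # Expected to list both subsystems from the JSON above:
    # nqn.2014-08.org.nvmexpress.discovery and nqn.2016-06.io.spdk:cnode1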
00:25:07.261 [2024-11-19 05:27:23.634707] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:07.261 [2024-11-19 05:27:23.634721] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:07.261 [2024-11-19 05:27:23.634729] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634736] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634742] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634752] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634758] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634764] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634770] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634776] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634782] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634788] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634795] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634801] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634807] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634813] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634819] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634825] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634831] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634837] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634843] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634849] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634855] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634861] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634867] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 
05:27:23.634873] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634879] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634885] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634891] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634897] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634903] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634909] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634915] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634921] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:07.261 [2024-11-19 05:27:23.634926] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:07.261 [2024-11-19 05:27:23.634931] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:07.261 [2024-11-19 05:27:23.634952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.634966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:07.261 [2024-11-19 05:27:23.640537] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.261 [2024-11-19 05:27:23.640547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.261 [2024-11-19 05:27:23.640554] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640562] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.261 [2024-11-19 05:27:23.640569] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:07.261 [2024-11-19 05:27:23.640576] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:07.261 [2024-11-19 05:27:23.640589] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.261 [2024-11-19 05:27:23.640618] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.261 [2024-11-19 05:27:23.640624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:07.261 [2024-11-19 05:27:23.640630] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:07.261 [2024-11-19 05:27:23.640636] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640643] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:07.261 [2024-11-19 05:27:23.640650] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.261 [2024-11-19 05:27:23.640675] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.261 [2024-11-19 05:27:23.640681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:07.261 [2024-11-19 05:27:23.640688] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:07.261 [2024-11-19 05:27:23.640693] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640700] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.261 [2024-11-19 05:27:23.640708] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.261 [2024-11-19 05:27:23.640735] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.261 [2024-11-19 05:27:23.640740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:07.261 [2024-11-19 05:27:23.640747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.261 [2024-11-19 05:27:23.640752] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640761] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.261 [2024-11-19 05:27:23.640771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.261 [2024-11-19 05:27:23.640786] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.261 [2024-11-19 05:27:23.640792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:07.261 [2024-11-19 05:27:23.640798] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:07.261 [2024-11-19 05:27:23.640804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:07.261 [2024-11-19 05:27:23.640810] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.640816] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:07.262 [2024-11-19 05:27:23.640923] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:07.262 [2024-11-19 05:27:23.640928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.262 [2024-11-19 05:27:23.640937] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.640945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.262 [2024-11-19 05:27:23.640962] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.640968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.640974] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.262 [2024-11-19 05:27:23.640980] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.640988] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.640995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.262 [2024-11-19 05:27:23.641015] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641026] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.262 [2024-11-19 05:27:23.641032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641044] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:07.262 [2024-11-19 05:27:23.641057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641066] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:07.262 [2024-11-19 05:27:23.641110] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641124] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:07.262 [2024-11-19 05:27:23.641130] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:07.262 [2024-11-19 05:27:23.641135] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:07.262 [2024-11-19 05:27:23.641142] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:07.262 [2024-11-19 05:27:23.641147] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:07.262 [2024-11-19 05:27:23.641153] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641159] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641176] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.262 [2024-11-19 05:27:23.641205] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641219] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.262 [2024-11-19 05:27:23.641233] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.262 [2024-11-19 05:27:23.641246] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.262 [2024-11-19 05:27:23.641260] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.262 [2024-11-19 05:27:23.641272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641278] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641289] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.262 [2024-11-19 05:27:23.641296] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.262 [2024-11-19 05:27:23.641322] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641334] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:07.262 [2024-11-19 05:27:23.641340] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:07.262 [2024-11-19 05:27:23.641346] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641354] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:07.262 [2024-11-19 05:27:23.641389] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641402] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641411] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:07.262 [2024-11-19 05:27:23.641433] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184100 00:25:07.262 [2024-11-19 05:27:23.641449] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.262 [2024-11-19 05:27:23.641474] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641490] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184100 00:25:07.262 [2024-11-19 05:27:23.641504] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641510] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641521] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641527] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641546] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184100 00:25:07.262 [2024-11-19 05:27:23.641560] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.262 [2024-11-19 05:27:23.641580] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.262 [2024-11-19 05:27:23.641586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:07.262 [2024-11-19 05:27:23.641597] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.262 ===================================================== 00:25:07.262 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:07.262 ===================================================== 00:25:07.262 Controller Capabilities/Features 00:25:07.262 ================================ 00:25:07.262 Vendor ID: 0000 00:25:07.262 Subsystem Vendor ID: 0000 00:25:07.262 Serial Number: .................... 00:25:07.262 Model Number: ........................................ 
00:25:07.262 Firmware Version: 24.01.1 00:25:07.262 Recommended Arb Burst: 0 00:25:07.262 IEEE OUI Identifier: 00 00 00 00:25:07.263 Multi-path I/O 00:25:07.263 May have multiple subsystem ports: No 00:25:07.263 May have multiple controllers: No 00:25:07.263 Associated with SR-IOV VF: No 00:25:07.263 Max Data Transfer Size: 131072 00:25:07.263 Max Number of Namespaces: 0 00:25:07.263 Max Number of I/O Queues: 1024 00:25:07.263 NVMe Specification Version (VS): 1.3 00:25:07.263 NVMe Specification Version (Identify): 1.3 00:25:07.263 Maximum Queue Entries: 128 00:25:07.263 Contiguous Queues Required: Yes 00:25:07.263 Arbitration Mechanisms Supported 00:25:07.263 Weighted Round Robin: Not Supported 00:25:07.263 Vendor Specific: Not Supported 00:25:07.263 Reset Timeout: 15000 ms 00:25:07.263 Doorbell Stride: 4 bytes 00:25:07.263 NVM Subsystem Reset: Not Supported 00:25:07.263 Command Sets Supported 00:25:07.263 NVM Command Set: Supported 00:25:07.263 Boot Partition: Not Supported 00:25:07.263 Memory Page Size Minimum: 4096 bytes 00:25:07.263 Memory Page Size Maximum: 4096 bytes 00:25:07.263 Persistent Memory Region: Not Supported 00:25:07.263 Optional Asynchronous Events Supported 00:25:07.263 Namespace Attribute Notices: Not Supported 00:25:07.263 Firmware Activation Notices: Not Supported 00:25:07.263 ANA Change Notices: Not Supported 00:25:07.263 PLE Aggregate Log Change Notices: Not Supported 00:25:07.263 LBA Status Info Alert Notices: Not Supported 00:25:07.263 EGE Aggregate Log Change Notices: Not Supported 00:25:07.263 Normal NVM Subsystem Shutdown event: Not Supported 00:25:07.263 Zone Descriptor Change Notices: Not Supported 00:25:07.263 Discovery Log Change Notices: Supported 00:25:07.263 Controller Attributes 00:25:07.263 128-bit Host Identifier: Not Supported 00:25:07.263 Non-Operational Permissive Mode: Not Supported 00:25:07.263 NVM Sets: Not Supported 00:25:07.263 Read Recovery Levels: Not Supported 00:25:07.263 Endurance Groups: Not Supported 00:25:07.263 Predictable Latency Mode: Not Supported 00:25:07.263 Traffic Based Keep Alive: Not Supported 00:25:07.263 Namespace Granularity: Not Supported 00:25:07.263 SQ Associations: Not Supported 00:25:07.263 UUID List: Not Supported 00:25:07.263 Multi-Domain Subsystem: Not Supported 00:25:07.263 Fixed Capacity Management: Not Supported 00:25:07.263 Variable Capacity Management: Not Supported 00:25:07.263 Delete Endurance Group: Not Supported 00:25:07.263 Delete NVM Set: Not Supported 00:25:07.263 Extended LBA Formats Supported: Not Supported 00:25:07.263 Flexible Data Placement Supported: Not Supported 00:25:07.263 00:25:07.263 Controller Memory Buffer Support 00:25:07.263 ================================ 00:25:07.263 Supported: No 00:25:07.263 00:25:07.263 Persistent Memory Region Support 00:25:07.263 ================================ 00:25:07.263 Supported: No 00:25:07.263 00:25:07.263 Admin Command Set Attributes 00:25:07.263 ============================ 00:25:07.263 Security Send/Receive: Not Supported 00:25:07.263 Format NVM: Not Supported 00:25:07.263 Firmware Activate/Download: Not Supported 00:25:07.263 Namespace Management: Not Supported 00:25:07.263 Device Self-Test: Not Supported 00:25:07.263 Directives: Not Supported 00:25:07.263 NVMe-MI: Not Supported 00:25:07.263 Virtualization Management: Not Supported 00:25:07.263 Doorbell Buffer Config: Not Supported 00:25:07.263 Get LBA Status Capability: Not Supported 00:25:07.263 Command & Feature Lockdown Capability: Not Supported 00:25:07.263 Abort Command Limit: 1 00:25:07.263 
Async Event Request Limit: 4 00:25:07.263 Number of Firmware Slots: N/A 00:25:07.263 Firmware Slot 1 Read-Only: N/A 00:25:07.263 Firmware Activation Without Reset: N/A 00:25:07.263 Multiple Update Detection Support: N/A 00:25:07.263 Firmware Update Granularity: No Information Provided 00:25:07.263 Per-Namespace SMART Log: No 00:25:07.263 Asymmetric Namespace Access Log Page: Not Supported 00:25:07.263 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:07.263 Command Effects Log Page: Not Supported 00:25:07.263 Get Log Page Extended Data: Supported 00:25:07.263 Telemetry Log Pages: Not Supported 00:25:07.263 Persistent Event Log Pages: Not Supported 00:25:07.263 Supported Log Pages Log Page: May Support 00:25:07.263 Commands Supported & Effects Log Page: Not Supported 00:25:07.263 Feature Identifiers & Effects Log Page: May Support 00:25:07.263 NVMe-MI Commands & Effects Log Page: May Support 00:25:07.263 Data Area 4 for Telemetry Log: Not Supported 00:25:07.263 Error Log Page Entries Supported: 128 00:25:07.263 Keep Alive: Not Supported 00:25:07.263 00:25:07.263 NVM Command Set Attributes 00:25:07.263 ========================== 00:25:07.263 Submission Queue Entry Size 00:25:07.263 Max: 1 00:25:07.263 Min: 1 00:25:07.263 Completion Queue Entry Size 00:25:07.263 Max: 1 00:25:07.263 Min: 1 00:25:07.263 Number of Namespaces: 0 00:25:07.263 Compare Command: Not Supported 00:25:07.263 Write Uncorrectable Command: Not Supported 00:25:07.263 Dataset Management Command: Not Supported 00:25:07.263 Write Zeroes Command: Not Supported 00:25:07.263 Set Features Save Field: Not Supported 00:25:07.263 Reservations: Not Supported 00:25:07.263 Timestamp: Not Supported 00:25:07.263 Copy: Not Supported 00:25:07.263 Volatile Write Cache: Not Present 00:25:07.263 Atomic Write Unit (Normal): 1 00:25:07.263 Atomic Write Unit (PFail): 1 00:25:07.263 Atomic Compare & Write Unit: 1 00:25:07.263 Fused Compare & Write: Supported 00:25:07.263 Scatter-Gather List 00:25:07.263 SGL Command Set: Supported 00:25:07.263 SGL Keyed: Supported 00:25:07.263 SGL Bit Bucket Descriptor: Not Supported 00:25:07.263 SGL Metadata Pointer: Not Supported 00:25:07.263 Oversized SGL: Not Supported 00:25:07.263 SGL Metadata Address: Not Supported 00:25:07.263 SGL Offset: Supported 00:25:07.263 Transport SGL Data Block: Not Supported 00:25:07.263 Replay Protected Memory Block: Not Supported 00:25:07.263 00:25:07.263 Firmware Slot Information 00:25:07.263 ========================= 00:25:07.263 Active slot: 0 00:25:07.263 00:25:07.263 00:25:07.263 Error Log 00:25:07.263 ========= 00:25:07.263 00:25:07.263 Active Namespaces 00:25:07.263 ================= 00:25:07.263 Discovery Log Page 00:25:07.263 ================== 00:25:07.263 Generation Counter: 2 00:25:07.263 Number of Records: 2 00:25:07.263 Record Format: 0 00:25:07.263 00:25:07.263 Discovery Log Entry 0 00:25:07.263 ---------------------- 00:25:07.263 Transport Type: 1 (RDMA) 00:25:07.263 Address Family: 1 (IPv4) 00:25:07.263 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:07.263 Entry Flags: 00:25:07.263 Duplicate Returned Information: 1 00:25:07.263 Explicit Persistent Connection Support for Discovery: 1 00:25:07.263 Transport Requirements: 00:25:07.263 Secure Channel: Not Required 00:25:07.263 Port ID: 0 (0x0000) 00:25:07.263 Controller ID: 65535 (0xffff) 00:25:07.263 Admin Max SQ Size: 128 00:25:07.263 Transport Service Identifier: 4420 00:25:07.263 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:07.263 Transport Address: 192.168.100.8 
00:25:07.263 Transport Specific Address Subtype - RDMA 00:25:07.263 RDMA QP Service Type: 1 (Reliable Connected) 00:25:07.263 RDMA Provider Type: 1 (No provider specified) 00:25:07.263 RDMA CM Service: 1 (RDMA_CM) 00:25:07.263 Discovery Log Entry 1 00:25:07.263 ---------------------- 00:25:07.263 Transport Type: 1 (RDMA) 00:25:07.263 Address Family: 1 (IPv4) 00:25:07.263 Subsystem Type: 2 (NVM Subsystem) 00:25:07.263 Entry Flags: 00:25:07.263 Duplicate Returned Information: 0 00:25:07.263 Explicit Persistent Connection Support for Discovery: 0 00:25:07.263 Transport Requirements: 00:25:07.263 Secure Channel: Not Required 00:25:07.263 Port ID: 0 (0x0000) 00:25:07.263 Controller ID: 65535 (0xffff) 00:25:07.263 Admin Max SQ Size: [2024-11-19 05:27:23.641673] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:07.263 [2024-11-19 05:27:23.641683] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2468 doesn't match qid 00:25:07.263 [2024-11-19 05:27:23.641696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32598 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:07.263 [2024-11-19 05:27:23.641702] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2468 doesn't match qid 00:25:07.263 [2024-11-19 05:27:23.641710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32598 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:07.263 [2024-11-19 05:27:23.641716] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2468 doesn't match qid 00:25:07.263 [2024-11-19 05:27:23.641724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32598 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:07.263 [2024-11-19 05:27:23.641730] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2468 doesn't match qid 00:25:07.264 [2024-11-19 05:27:23.641738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32598 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641746] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.641771] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.641777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641785] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.641799] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641815] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.641821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641828] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:07.264 [2024-11-19 05:27:23.641833] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:07.264 [2024-11-19 05:27:23.641839] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641848] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.641875] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.641881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641887] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641896] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.641922] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.641927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641934] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641942] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.641965] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.641971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.641978] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641986] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.641994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 
0x184100 00:25:07.264 [2024-11-19 05:27:23.642040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642062] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642074] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642083] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642111] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642123] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642131] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642156] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642168] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642178] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642201] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642214] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642222] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642247] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642259] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642268] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642295] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642315] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642336] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642348] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642357] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642380] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642392] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642400] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642426] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642437] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642447] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642469] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642480] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642489] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.264 [2024-11-19 05:27:23.642512] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.264 [2024-11-19 05:27:23.642518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:07.264 [2024-11-19 05:27:23.642524] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.264 [2024-11-19 05:27:23.642537] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642563] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642574] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642583] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642610] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642622] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642630] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642655] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642667] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642676] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642705] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642718] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642727] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642750] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642762] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642770] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642801] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642821] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642848] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642860] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642868] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642892] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 
05:27:23.642903] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642912] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642935] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.642947] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642956] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.642963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.642985] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.642990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643000] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643008] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.643030] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.643035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643042] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643050] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.643075] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.643081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643087] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643096] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.643121] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.643126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643132] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643141] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.643164] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.643170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643176] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.265 [2024-11-19 05:27:23.643192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.265 [2024-11-19 05:27:23.643215] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.265 [2024-11-19 05:27:23.643221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:07.265 [2024-11-19 05:27:23.643227] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643236] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643257] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643271] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643279] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643302] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643314] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643323] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643346] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643358] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643366] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643388] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643399] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643408] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643429] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643441] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643450] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643483] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643491] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643515] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 
05:27:23.643528] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643540] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643563] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643575] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643584] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643615] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643626] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643635] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643664] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643676] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643684] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643713] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643725] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643733] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643759] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643770] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643779] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643800] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643811] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643820] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643842] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643853] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643862] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643883] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643895] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643903] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643924] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643935] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.643967] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.643972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.643979] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643987] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.266 [2024-11-19 05:27:23.643995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.266 [2024-11-19 05:27:23.644010] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.266 [2024-11-19 05:27:23.644016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.266 [2024-11-19 05:27:23.644022] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644031] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644065] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644074] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644101] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644113] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644121] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644144] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 
05:27:23.644156] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644165] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644190] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644201] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644210] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644247] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644255] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644277] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644289] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644297] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644322] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644334] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644342] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644366] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644378] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644386] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644409] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644421] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644429] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644457] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644468] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.644506] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.644511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.644517] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.644526] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.648539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.267 [2024-11-19 05:27:23.648561] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.267 [2024-11-19 05:27:23.648566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0 00:25:07.267 [2024-11-19 05:27:23.648572] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.648579] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:25:07.267 128
00:25:07.267 Transport Service Identifier: 4420
00:25:07.267 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:07.267 Transport Address: 192.168.100.8
00:25:07.267 Transport Specific Address Subtype - RDMA
00:25:07.267 RDMA QP Service Type: 1 (Reliable Connected)
00:25:07.267 RDMA Provider Type: 1 (No provider specified)
00:25:07.267 RDMA CM Service: 1 (RDMA_CM)
00:25:07.267 05:27:23 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:07.267 [2024-11-19 05:27:23.715856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:07.267 [2024-11-19 05:27:23.715893] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917633 ]
00:25:07.267 EAL: No free 2048 kB hugepages reported on node 1
00:25:07.267 [2024-11-19 05:27:23.763722] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:25:07.267 [2024-11-19 05:27:23.763787] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:25:07.267 [2024-11-19 05:27:23.763802] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:25:07.267 [2024-11-19 05:27:23.763807] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:25:07.267 [2024-11-19 05:27:23.763831] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:25:07.267 [2024-11-19 05:27:23.782055] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:25:07.267 [2024-11-19 05:27:23.792112] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:07.267 [2024-11-19 05:27:23.792122] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:07.267 [2024-11-19 05:27:23.792129] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792136] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792142] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792148] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792154] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792160] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792172] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792178] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792184] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.267 [2024-11-19 05:27:23.792190] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792197] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792203] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792211] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792217] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792224] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792230] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792236] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792242] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792248] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792254] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792260] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792266] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 
05:27:23.792272] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792278] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792284] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792290] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792296] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792303] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792309] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792315] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792320] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:07.268 [2024-11-19 05:27:23.792326] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:07.268 [2024-11-19 05:27:23.792330] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:07.268 [2024-11-19 05:27:23.792345] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.792356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:07.268 [2024-11-19 05:27:23.797536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797551] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797558] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.268 [2024-11-19 05:27:23.797565] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:07.268 [2024-11-19 05:27:23.797571] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:07.268 [2024-11-19 05:27:23.797582] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.797606] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797618] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:07.268 [2024-11-19 05:27:23.797624] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797630] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:07.268 [2024-11-19 05:27:23.797638] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.797670] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797682] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:07.268 [2024-11-19 05:27:23.797688] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797702] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.797730] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797748] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797756] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.797784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797795] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:07.268 [2024-11-19 05:27:23.797801] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797807] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797814] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797920] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:07.268 [2024-11-19 05:27:23.797925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797935] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.797960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.797966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.797972] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.268 [2024-11-19 05:27:23.797978] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797986] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.797994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.268 [2024-11-19 05:27:23.798014] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.798019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.798025] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.268 [2024-11-19 05:27:23.798031] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:07.268 [2024-11-19 05:27:23.798037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.798043] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:07.268 [2024-11-19 05:27:23.798052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.268 [2024-11-19 05:27:23.798060] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.798068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:07.268 [2024-11-19 05:27:23.798107] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.268 [2024-11-19 05:27:23.798113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:07.268 [2024-11-19 05:27:23.798121] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:07.268 [2024-11-19 05:27:23.798127] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:07.268 [2024-11-19 05:27:23.798133] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:07.268 [2024-11-19 05:27:23.798138] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:07.268 [2024-11-19 05:27:23.798143] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:07.268 [2024-11-19 05:27:23.798149] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:07.268 [2024-11-19 05:27:23.798155] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.268 [2024-11-19 05:27:23.798164] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798172] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798199] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798212] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.269 [2024-11-19 05:27:23.798226] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.269 [2024-11-19 05:27:23.798240] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.269 [2024-11-19 05:27:23.798253] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.269 [2024-11-19 05:27:23.798266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798272] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798282] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798289] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798313] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798324] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:07.269 [2024-11-19 05:27:23.798330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798336] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798343] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798359] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798390] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798447] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798453] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798461] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798469] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184100 00:25:07.269 [2024-11-19 05:27:23.798503] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798520] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:07.269 
[2024-11-19 05:27:23.798538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798545] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798552] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798560] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:07.269 [2024-11-19 05:27:23.798601] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798625] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798632] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798640] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:07.269 [2024-11-19 05:27:23.798676] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798690] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798731] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:25:07.269 [2024-11-19 05:27:23.798737] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:07.269 [2024-11-19 05:27:23.798743] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:07.269 [2024-11-19 05:27:23.798757] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798772] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.269 [2024-11-19 05:27:23.798789] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798801] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798807] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798819] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798828] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798858] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798870] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798879] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798912] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798923] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798932] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.269 [2024-11-19 05:27:23.798962] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.269 [2024-11-19 05:27:23.798968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:07.269 [2024-11-19 05:27:23.798975] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.269 [2024-11-19 05:27:23.798986] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.798994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184100 00:25:07.270 [2024-11-19 05:27:23.799002] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184100 00:25:07.270 [2024-11-19 05:27:23.799018] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184100 00:25:07.270 [2024-11-19 05:27:23.799033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184100 00:25:07.270 [2024-11-19 05:27:23.799050] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.270 [2024-11-19 05:27:23.799055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:07.270 [2024-11-19 05:27:23.799067] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799074] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.270 [2024-11-19 05:27:23.799079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:07.270 [2024-11-19 05:27:23.799088] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799094] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.270 [2024-11-19 05:27:23.799099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:07.270 [2024-11-19 05:27:23.799106] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.270 [2024-11-19 05:27:23.799112] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:25:07.270 [2024-11-19 05:27:23.799117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:07.270 [2024-11-19 05:27:23.799128] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100
00:25:07.270 =====================================================
00:25:07.270 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:07.270 =====================================================
00:25:07.270 Controller Capabilities/Features
00:25:07.270 ================================
00:25:07.270 Vendor ID: 8086
00:25:07.270 Subsystem Vendor ID: 8086
00:25:07.270 Serial Number: SPDK00000000000001
00:25:07.270 Model Number: SPDK bdev Controller
00:25:07.270 Firmware Version: 24.01.1
00:25:07.270 Recommended Arb Burst: 6
00:25:07.270 IEEE OUI Identifier: e4 d2 5c
00:25:07.270 Multi-path I/O
00:25:07.270 May have multiple subsystem ports: Yes
00:25:07.270 May have multiple controllers: Yes
00:25:07.270 Associated with SR-IOV VF: No
00:25:07.270 Max Data Transfer Size: 131072
00:25:07.270 Max Number of Namespaces: 32
00:25:07.270 Max Number of I/O Queues: 127
00:25:07.270 NVMe Specification Version (VS): 1.3
00:25:07.270 NVMe Specification Version (Identify): 1.3
00:25:07.270 Maximum Queue Entries: 128
00:25:07.270 Contiguous Queues Required: Yes
00:25:07.270 Arbitration Mechanisms Supported
00:25:07.270 Weighted Round Robin: Not Supported
00:25:07.270 Vendor Specific: Not Supported
00:25:07.270 Reset Timeout: 15000 ms
00:25:07.270 Doorbell Stride: 4 bytes
00:25:07.270 NVM Subsystem Reset: Not Supported
00:25:07.270 Command Sets Supported
00:25:07.270 NVM Command Set: Supported
00:25:07.270 Boot Partition: Not Supported
00:25:07.270 Memory Page Size Minimum: 4096 bytes
00:25:07.270 Memory Page Size Maximum: 4096 bytes
00:25:07.270 Persistent Memory Region: Not Supported
00:25:07.270 Optional Asynchronous Events Supported
00:25:07.270 Namespace Attribute Notices: Supported
00:25:07.270 Firmware Activation Notices: Not Supported
00:25:07.270 ANA Change Notices: Not Supported
00:25:07.270 PLE Aggregate Log Change Notices: Not Supported
00:25:07.270 LBA Status Info Alert Notices: Not Supported
00:25:07.270 EGE Aggregate Log Change Notices: Not Supported
00:25:07.270 Normal NVM Subsystem Shutdown event: Not Supported
00:25:07.270 Zone Descriptor Change Notices: Not Supported
00:25:07.270 Discovery Log Change Notices: Not Supported
00:25:07.270 Controller Attributes
00:25:07.270 128-bit Host Identifier: Supported
00:25:07.270 Non-Operational Permissive Mode: Not Supported
00:25:07.270 NVM Sets: Not Supported
00:25:07.270 Read Recovery Levels: Not Supported
00:25:07.270 Endurance Groups: Not Supported
00:25:07.270 Predictable Latency Mode: Not Supported
00:25:07.270 Traffic Based Keep Alive: Not Supported
00:25:07.270 Namespace Granularity: Not Supported
00:25:07.270 SQ Associations: Not Supported
00:25:07.270 UUID List: Not Supported
00:25:07.270 Multi-Domain Subsystem: Not Supported
00:25:07.270 Fixed Capacity Management: Not Supported
00:25:07.270 Variable Capacity Management: Not Supported
00:25:07.270 Delete Endurance Group: Not Supported
00:25:07.270 Delete NVM Set: Not Supported
00:25:07.270 Extended LBA Formats Supported: Not Supported
00:25:07.270 Flexible Data Placement Supported: Not Supported
00:25:07.270 
00:25:07.270 Controller Memory Buffer Support
00:25:07.270 ================================
00:25:07.270 Supported: No
00:25:07.270 
00:25:07.270 Persistent Memory Region Support
00:25:07.270 ================================
00:25:07.270 Supported: No
00:25:07.270 
00:25:07.270 Admin Command Set Attributes
00:25:07.270 ============================
00:25:07.270 Security Send/Receive: Not Supported
00:25:07.270 Format NVM: Not Supported
00:25:07.270 Firmware Activate/Download: Not Supported
00:25:07.270 Namespace Management: Not Supported
00:25:07.270 Device Self-Test: Not Supported
00:25:07.270 Directives: Not Supported
00:25:07.270 NVMe-MI: Not Supported
00:25:07.270 Virtualization Management: Not Supported
00:25:07.270 Doorbell Buffer Config: Not Supported
00:25:07.270 Get LBA Status Capability: Not Supported
00:25:07.270 Command & Feature Lockdown Capability: Not Supported
00:25:07.270 Abort Command Limit: 4
00:25:07.270 Async Event Request Limit: 4
00:25:07.270 Number of Firmware Slots: N/A
00:25:07.270 Firmware Slot 1 Read-Only: N/A
00:25:07.270 Firmware Activation Without Reset: N/A
00:25:07.270 Multiple Update Detection Support: N/A
00:25:07.270 Firmware Update Granularity: No Information Provided
00:25:07.270 Per-Namespace SMART Log: No
00:25:07.270 Asymmetric Namespace Access Log Page: Not Supported
00:25:07.270 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:07.270 Command Effects Log Page: Supported
00:25:07.270 Get Log Page Extended Data: Supported
00:25:07.270 Telemetry Log Pages: Not Supported
00:25:07.270 Persistent Event Log Pages: Not Supported
00:25:07.270 Supported Log Pages Log Page: May Support
00:25:07.270 Commands Supported & Effects Log Page: Not Supported
00:25:07.270 Feature Identifiers & Effects Log Page: May Support
00:25:07.270 NVMe-MI Commands & Effects Log Page: May Support
00:25:07.270 Data Area 4 for Telemetry Log: Not Supported
00:25:07.270 Error Log Page Entries Supported: 128
00:25:07.270 Keep Alive: Supported
00:25:07.270 Keep Alive Granularity: 10000 ms
00:25:07.270 
00:25:07.270 NVM Command Set Attributes
00:25:07.270 ==========================
00:25:07.270 Submission Queue Entry Size
00:25:07.270 Max: 64
00:25:07.270 Min: 64
00:25:07.270 Completion Queue Entry Size
00:25:07.270 Max: 16
00:25:07.270 Min: 16
00:25:07.270 Number of Namespaces: 32
00:25:07.270 Compare Command: Supported
00:25:07.270 Write Uncorrectable Command: Not Supported
00:25:07.270 Dataset Management Command: Supported
00:25:07.270 Write Zeroes Command: Supported
00:25:07.270 Set Features Save Field: Not Supported
00:25:07.271 Reservations: Supported
00:25:07.271 Timestamp: Not Supported
00:25:07.271 Copy: Supported
00:25:07.271 Volatile Write Cache: Present
00:25:07.271 Atomic Write Unit (Normal): 1
00:25:07.271 Atomic Write Unit (PFail): 1
00:25:07.271 Atomic Compare & Write Unit: 1
00:25:07.271 Fused Compare & Write: Supported
00:25:07.271 Scatter-Gather List
00:25:07.271 SGL Command Set: Supported
00:25:07.271 SGL Keyed: Supported
00:25:07.271 SGL Bit Bucket Descriptor: Not Supported
00:25:07.271 SGL Metadata Pointer: Not Supported
00:25:07.271 Oversized SGL: Not Supported
00:25:07.271 SGL Metadata Address: Not Supported
00:25:07.271 SGL Offset: Supported
00:25:07.271 Transport SGL Data Block: Not Supported
00:25:07.271 Replay Protected Memory Block: Not Supported
00:25:07.271 
00:25:07.271 Firmware Slot Information
00:25:07.271 =========================
00:25:07.271 Active slot: 1
00:25:07.271 Slot 1 Firmware Revision: 24.01.1
00:25:07.271 
00:25:07.271 
00:25:07.271 Commands Supported and Effects
00:25:07.271 ==============================
00:25:07.271 Admin Commands 00:25:07.271 -------------- 00:25:07.271 Get Log Page (02h): Supported 00:25:07.271 Identify (06h): Supported 00:25:07.271 Abort (08h): Supported 00:25:07.271 Set Features (09h): Supported 00:25:07.271 Get Features (0Ah): Supported 00:25:07.271 Asynchronous Event Request (0Ch): Supported 00:25:07.271 Keep Alive (18h): Supported 00:25:07.271 I/O Commands 00:25:07.271 ------------ 00:25:07.271 Flush (00h): Supported LBA-Change 00:25:07.271 Write (01h): Supported LBA-Change 00:25:07.271 Read (02h): Supported 00:25:07.271 Compare (05h): Supported 00:25:07.271 Write Zeroes (08h): Supported LBA-Change 00:25:07.271 Dataset Management (09h): Supported LBA-Change 00:25:07.271 Copy (19h): Supported LBA-Change 00:25:07.271 Unknown (79h): Supported LBA-Change 00:25:07.271 Unknown (7Ah): Supported 00:25:07.271 00:25:07.271 Error Log 00:25:07.271 ========= 00:25:07.271 00:25:07.271 Arbitration 00:25:07.271 =========== 00:25:07.271 Arbitration Burst: 1 00:25:07.271 00:25:07.271 Power Management 00:25:07.271 ================ 00:25:07.271 Number of Power States: 1 00:25:07.271 Current Power State: Power State #0 00:25:07.271 Power State #0: 00:25:07.271 Max Power: 0.00 W 00:25:07.271 Non-Operational State: Operational 00:25:07.271 Entry Latency: Not Reported 00:25:07.271 Exit Latency: Not Reported 00:25:07.271 Relative Read Throughput: 0 00:25:07.271 Relative Read Latency: 0 00:25:07.271 Relative Write Throughput: 0 00:25:07.271 Relative Write Latency: 0 00:25:07.271 Idle Power: Not Reported 00:25:07.271 Active Power: Not Reported 00:25:07.271 Non-Operational Permissive Mode: Not Supported 00:25:07.271 00:25:07.271 Health Information 00:25:07.271 ================== 00:25:07.271 Critical Warnings: 00:25:07.271 Available Spare Space: OK 00:25:07.271 Temperature: OK 00:25:07.271 Device Reliability: OK 00:25:07.271 Read Only: No 00:25:07.271 Volatile Memory Backup: OK 00:25:07.271 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:07.271 Temperature Threshol[2024-11-19 05:27:23.799212] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799241] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799253] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799278] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:07.271 [2024-11-19 05:27:23.799289] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9709 doesn't match qid 00:25:07.271 [2024-11-19 05:27:23.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799309] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9709 doesn't match qid 00:25:07.271 [2024-11-19 05:27:23.799316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799323] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9709 doesn't match qid 00:25:07.271 [2024-11-19 05:27:23.799330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799337] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9709 doesn't match qid 00:25:07.271 [2024-11-19 05:27:23.799344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799353] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799385] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799398] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799412] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799434] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799446] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:07.271 [2024-11-19 05:27:23.799452] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:07.271 [2024-11-19 05:27:23.799458] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799466] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799490] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799502] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799511] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799542] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799554] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799564] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799590] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799611] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799637] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799648] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799657] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799686] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:07.271 [2024-11-19 05:27:23.799698] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799707] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.271 [2024-11-19 05:27:23.799715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.271 [2024-11-19 05:27:23.799731] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.271 [2024-11-19 05:27:23.799736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.799743] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799752] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.799777] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.799783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.799789] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799798] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.799823] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.799829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.799837] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799845] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.799871] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.799876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.799883] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799891] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.799917] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.799922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.799928] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799937] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.799960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.799966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 
05:27:23.799972] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799981] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.799988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800006] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800018] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800026] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800048] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800060] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800068] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800090] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800103] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800111] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800135] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800147] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800156] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800181] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800193] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800201] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800225] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800237] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800245] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800284] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800293] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800318] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800330] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800339] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800360] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800373] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800382] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800407] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800419] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800427] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800449] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800461] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800469] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800498] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.272 [2024-11-19 05:27:23.800504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:07.272 [2024-11-19 05:27:23.800510] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800519] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.272 [2024-11-19 05:27:23.800526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.272 [2024-11-19 05:27:23.800543] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800555] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800564] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800585] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 
05:27:23.800597] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800606] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800629] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800642] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800651] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800672] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800684] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800693] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800716] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800728] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800736] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800760] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800772] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800780] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800802] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800822] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800855] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800867] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800876] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800898] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800919] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800946] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.800958] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800967] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.800974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.800992] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.800997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801004] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801012] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801036] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801047] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801056] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801085] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801097] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801105] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801131] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801143] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801151] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801176] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801187] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801196] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801227] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 
05:27:23.801239] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801248] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801271] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801283] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801291] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801313] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:07.273 [2024-11-19 05:27:23.801324] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801333] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.273 [2024-11-19 05:27:23.801341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.273 [2024-11-19 05:27:23.801364] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.273 [2024-11-19 05:27:23.801369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.801376] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801384] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.274 [2024-11-19 05:27:23.801410] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.274 [2024-11-19 05:27:23.801415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.801421] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801430] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.274 [2024-11-19 05:27:23.801452] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.274 [2024-11-19 05:27:23.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.801464] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801473] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.274 [2024-11-19 05:27:23.801495] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.274 [2024-11-19 05:27:23.801500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.801506] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801515] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.801523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.274 [2024-11-19 05:27:23.805535] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.274 [2024-11-19 05:27:23.805543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.805549] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.805558] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.805566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:07.274 [2024-11-19 05:27:23.805582] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:07.274 [2024-11-19 05:27:23.805587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:25:07.274 [2024-11-19 05:27:23.805593] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:07.274 [2024-11-19 05:27:23.805600] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:07.532 Available Spare: 0% 00:25:07.532 Available Spare Threshold: 0% 00:25:07.532 Life Percentage Used: 0% 00:25:07.532 Data Units Read: 0 00:25:07.532 Data Units Written: 0 00:25:07.532 Host Read Commands: 0 00:25:07.532 Host Write Commands: 0 00:25:07.532 Controller Busy Time: 0 minutes 00:25:07.532 Power Cycles: 0 00:25:07.532 Power On Hours: 0 hours 00:25:07.532 Unsafe Shutdowns: 0 00:25:07.532 Unrecoverable Media Errors: 0 00:25:07.532 Lifetime Error Log Entries: 0 00:25:07.532 Warning Temperature Time: 0 minutes 00:25:07.532 Critical Temperature Time: 0 minutes 00:25:07.532 00:25:07.532 Number of Queues 00:25:07.532 ================ 00:25:07.532 Number of I/O
Submission Queues: 127 00:25:07.532 Number of I/O Completion Queues: 127 00:25:07.532 00:25:07.532 Active Namespaces 00:25:07.532 ================= 00:25:07.532 Namespace ID:1 00:25:07.532 Error Recovery Timeout: Unlimited 00:25:07.532 Command Set Identifier: NVM (00h) 00:25:07.532 Deallocate: Supported 00:25:07.532 Deallocated/Unwritten Error: Not Supported 00:25:07.532 Deallocated Read Value: Unknown 00:25:07.532 Deallocate in Write Zeroes: Not Supported 00:25:07.532 Deallocated Guard Field: 0xFFFF 00:25:07.532 Flush: Supported 00:25:07.532 Reservation: Supported 00:25:07.532 Namespace Sharing Capabilities: Multiple Controllers 00:25:07.532 Size (in LBAs): 131072 (0GiB) 00:25:07.532 Capacity (in LBAs): 131072 (0GiB) 00:25:07.532 Utilization (in LBAs): 131072 (0GiB) 00:25:07.532 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:07.532 EUI64: ABCDEF0123456789 00:25:07.532 UUID: 2c368676-8536-4535-b246-2a208dce9c90 00:25:07.532 Thin Provisioning: Not Supported 00:25:07.532 Per-NS Atomic Units: Yes 00:25:07.532 Atomic Boundary Size (Normal): 0 00:25:07.532 Atomic Boundary Size (PFail): 0 00:25:07.532 Atomic Boundary Offset: 0 00:25:07.532 Maximum Single Source Range Length: 65535 00:25:07.532 Maximum Copy Length: 65535 00:25:07.532 Maximum Source Range Count: 1 00:25:07.532 NGUID/EUI64 Never Reused: No 00:25:07.532 Namespace Write Protected: No 00:25:07.532 Number of LBA Formats: 1 00:25:07.532 Current LBA Format: LBA Format #00 00:25:07.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:07.532 00:25:07.532 05:27:23 -- host/identify.sh@51 -- # sync 00:25:07.532 05:27:23 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.532 05:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.532 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.532 05:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.532 05:27:23 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:07.532 05:27:23 -- host/identify.sh@56 -- # nvmftestfini 00:25:07.532 05:27:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.532 05:27:23 -- nvmf/common.sh@116 -- # sync 00:25:07.532 05:27:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:07.532 05:27:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:07.532 05:27:23 -- nvmf/common.sh@119 -- # set +e 00:25:07.532 05:27:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.532 05:27:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:07.532 rmmod nvme_rdma 00:25:07.532 rmmod nvme_fabrics 00:25:07.532 05:27:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.532 05:27:23 -- nvmf/common.sh@123 -- # set -e 00:25:07.532 05:27:23 -- nvmf/common.sh@124 -- # return 0 00:25:07.532 05:27:23 -- nvmf/common.sh@477 -- # '[' -n 1917350 ']' 00:25:07.532 05:27:23 -- nvmf/common.sh@478 -- # killprocess 1917350 00:25:07.532 05:27:23 -- common/autotest_common.sh@936 -- # '[' -z 1917350 ']' 00:25:07.532 05:27:23 -- common/autotest_common.sh@940 -- # kill -0 1917350 00:25:07.532 05:27:23 -- common/autotest_common.sh@941 -- # uname 00:25:07.532 05:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:07.532 05:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1917350 00:25:07.532 05:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:07.532 05:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:07.532 05:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1917350' 00:25:07.532 
killing process with pid 1917350 00:25:07.532 05:27:23 -- common/autotest_common.sh@955 -- # kill 1917350 00:25:07.532 [2024-11-19 05:27:23.981779] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:07.532 05:27:23 -- common/autotest_common.sh@960 -- # wait 1917350 00:25:07.789 05:27:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.789 05:27:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:07.789 00:25:07.789 real 0m8.820s 00:25:07.789 user 0m8.682s 00:25:07.789 sys 0m5.654s 00:25:07.789 05:27:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:07.789 05:27:24 -- common/autotest_common.sh@10 -- # set +x 00:25:07.789 ************************************ 00:25:07.789 END TEST nvmf_identify 00:25:07.789 ************************************ 00:25:07.789 05:27:24 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:07.789 05:27:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:07.789 05:27:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:07.789 05:27:24 -- common/autotest_common.sh@10 -- # set +x 00:25:07.789 ************************************ 00:25:07.789 START TEST nvmf_perf 00:25:07.789 ************************************ 00:25:07.789 05:27:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:08.047 * Looking for test storage... 00:25:08.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:08.047 05:27:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:08.047 05:27:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:08.047 05:27:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:08.047 05:27:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:08.047 05:27:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:08.047 05:27:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:08.047 05:27:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:08.047 05:27:24 -- scripts/common.sh@335 -- # IFS=.-: 00:25:08.047 05:27:24 -- scripts/common.sh@335 -- # read -ra ver1 00:25:08.047 05:27:24 -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.047 05:27:24 -- scripts/common.sh@336 -- # read -ra ver2 00:25:08.047 05:27:24 -- scripts/common.sh@337 -- # local 'op=<' 00:25:08.047 05:27:24 -- scripts/common.sh@339 -- # ver1_l=2 00:25:08.047 05:27:24 -- scripts/common.sh@340 -- # ver2_l=1 00:25:08.047 05:27:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:08.047 05:27:24 -- scripts/common.sh@343 -- # case "$op" in 00:25:08.047 05:27:24 -- scripts/common.sh@344 -- # : 1 00:25:08.047 05:27:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:08.047 05:27:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.047 05:27:24 -- scripts/common.sh@364 -- # decimal 1 00:25:08.047 05:27:24 -- scripts/common.sh@352 -- # local d=1 00:25:08.047 05:27:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.047 05:27:24 -- scripts/common.sh@354 -- # echo 1 00:25:08.047 05:27:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:08.047 05:27:24 -- scripts/common.sh@365 -- # decimal 2 00:25:08.047 05:27:24 -- scripts/common.sh@352 -- # local d=2 00:25:08.047 05:27:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.047 05:27:24 -- scripts/common.sh@354 -- # echo 2 00:25:08.047 05:27:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:08.047 05:27:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:08.047 05:27:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:08.047 05:27:24 -- scripts/common.sh@367 -- # return 0 00:25:08.047 05:27:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.047 05:27:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.047 --rc genhtml_branch_coverage=1 00:25:08.047 --rc genhtml_function_coverage=1 00:25:08.047 --rc genhtml_legend=1 00:25:08.047 --rc geninfo_all_blocks=1 00:25:08.047 --rc geninfo_unexecuted_blocks=1 00:25:08.047 00:25:08.047 ' 00:25:08.047 05:27:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.047 --rc genhtml_branch_coverage=1 00:25:08.047 --rc genhtml_function_coverage=1 00:25:08.047 --rc genhtml_legend=1 00:25:08.047 --rc geninfo_all_blocks=1 00:25:08.047 --rc geninfo_unexecuted_blocks=1 00:25:08.047 00:25:08.047 ' 00:25:08.047 05:27:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.047 --rc genhtml_branch_coverage=1 00:25:08.047 --rc genhtml_function_coverage=1 00:25:08.047 --rc genhtml_legend=1 00:25:08.047 --rc geninfo_all_blocks=1 00:25:08.047 --rc geninfo_unexecuted_blocks=1 00:25:08.047 00:25:08.047 ' 00:25:08.047 05:27:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.047 --rc genhtml_branch_coverage=1 00:25:08.047 --rc genhtml_function_coverage=1 00:25:08.047 --rc genhtml_legend=1 00:25:08.047 --rc geninfo_all_blocks=1 00:25:08.047 --rc geninfo_unexecuted_blocks=1 00:25:08.047 00:25:08.047 ' 00:25:08.047 05:27:24 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.047 05:27:24 -- nvmf/common.sh@7 -- # uname -s 00:25:08.047 05:27:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.047 05:27:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.047 05:27:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.047 05:27:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.047 05:27:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.047 05:27:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.047 05:27:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.047 05:27:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.047 05:27:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.047 05:27:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.047 05:27:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:25:08.047 05:27:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:08.047 05:27:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.047 05:27:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.047 05:27:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.047 05:27:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:08.047 05:27:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.047 05:27:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.047 05:27:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.047 05:27:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.047 05:27:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.048 05:27:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.048 05:27:24 -- paths/export.sh@5 -- # export PATH 00:25:08.048 05:27:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.048 05:27:24 -- nvmf/common.sh@46 -- # : 0 00:25:08.048 05:27:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:08.048 05:27:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:08.048 05:27:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:08.048 05:27:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.048 05:27:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.048 05:27:24 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:08.048 05:27:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:08.048 05:27:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:08.048 05:27:24 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:08.048 05:27:24 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:08.048 05:27:24 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:08.048 05:27:24 -- host/perf.sh@17 -- # nvmftestinit 00:25:08.048 05:27:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:08.048 05:27:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.048 05:27:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:08.048 05:27:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:08.048 05:27:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:08.048 05:27:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.048 05:27:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.048 05:27:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.048 05:27:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:08.048 05:27:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:08.048 05:27:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:08.048 05:27:24 -- common/autotest_common.sh@10 -- # set +x 00:25:14.709 05:27:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:14.709 05:27:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:14.709 05:27:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:14.709 05:27:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:14.709 05:27:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:14.709 05:27:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:14.709 05:27:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:14.709 05:27:30 -- nvmf/common.sh@294 -- # net_devs=() 00:25:14.709 05:27:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:14.709 05:27:30 -- nvmf/common.sh@295 -- # e810=() 00:25:14.709 05:27:30 -- nvmf/common.sh@295 -- # local -ga e810 00:25:14.709 05:27:30 -- nvmf/common.sh@296 -- # x722=() 00:25:14.709 05:27:30 -- nvmf/common.sh@296 -- # local -ga x722 00:25:14.709 05:27:30 -- nvmf/common.sh@297 -- # mlx=() 00:25:14.709 05:27:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:14.709 05:27:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.709 05:27:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:14.709 05:27:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:25:14.709 05:27:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:14.709 05:27:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:14.709 05:27:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:14.709 05:27:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.709 05:27:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:14.709 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:14.709 05:27:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.709 05:27:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.709 05:27:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:14.709 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:14.709 05:27:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.709 05:27:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:14.709 05:27:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:14.709 05:27:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.709 05:27:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.709 05:27:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.709 05:27:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.709 05:27:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:14.709 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:14.709 05:27:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.709 05:27:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.709 05:27:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.709 05:27:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.709 05:27:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.709 05:27:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:14.709 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:14.710 05:27:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.710 05:27:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:14.710 05:27:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:14.710 05:27:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:14.710 05:27:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:14.710 05:27:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:14.710 05:27:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:14.710 05:27:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:14.710 05:27:30 -- nvmf/common.sh@57 -- # uname 00:25:14.710 05:27:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:14.710 05:27:30 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:25:14.710 05:27:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:14.710 05:27:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:14.710 05:27:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:14.710 05:27:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:14.710 05:27:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:14.710 05:27:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:14.710 05:27:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:14.710 05:27:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:14.710 05:27:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:14.710 05:27:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:14.710 05:27:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:14.710 05:27:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:14.710 05:27:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:14.710 05:27:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:14.710 05:27:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@104 -- # continue 2 00:25:14.710 05:27:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@104 -- # continue 2 00:25:14.710 05:27:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:14.710 05:27:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.710 05:27:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:14.710 05:27:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:14.710 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:14.710 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:14.710 altname enp217s0f0np0 00:25:14.710 altname ens818f0np0 00:25:14.710 inet 192.168.100.8/24 scope global mlx_0_0 00:25:14.710 valid_lft forever preferred_lft forever 00:25:14.710 05:27:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:14.710 05:27:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.710 05:27:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:14.710 05:27:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:14.710 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:14.710 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:14.710 altname enp217s0f1np1 00:25:14.710 altname ens818f1np1 00:25:14.710 inet 192.168.100.9/24 scope global mlx_0_1 00:25:14.710 valid_lft forever preferred_lft forever 00:25:14.710 05:27:31 -- nvmf/common.sh@410 -- # return 0 00:25:14.710 05:27:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.710 05:27:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:14.710 05:27:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:14.710 05:27:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:14.710 05:27:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:14.710 05:27:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:14.710 05:27:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:14.710 05:27:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:14.710 05:27:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:14.710 05:27:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@104 -- # continue 2 00:25:14.710 05:27:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.710 05:27:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:14.710 05:27:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@104 -- # continue 2 00:25:14.710 05:27:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:14.710 05:27:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.710 05:27:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:14.710 05:27:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.710 05:27:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.710 05:27:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:14.710 192.168.100.9' 00:25:14.710 05:27:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:14.710 192.168.100.9' 00:25:14.710 05:27:31 -- nvmf/common.sh@445 -- # head -n 1 00:25:14.710 05:27:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:14.710 05:27:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:14.710 192.168.100.9' 00:25:14.710 05:27:31 -- nvmf/common.sh@446 -- # tail -n +2 00:25:14.710 05:27:31 -- nvmf/common.sh@446 -- # head -n 1 00:25:14.710 05:27:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:14.710 05:27:31 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:14.710 05:27:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:14.710 05:27:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:14.710 05:27:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:14.710 05:27:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:14.710 05:27:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:14.710 05:27:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:14.710 05:27:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.710 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.710 05:27:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.710 05:27:31 -- nvmf/common.sh@469 -- # nvmfpid=1921076 00:25:14.710 05:27:31 -- nvmf/common.sh@470 -- # waitforlisten 1921076 00:25:14.710 05:27:31 -- common/autotest_common.sh@829 -- # '[' -z 1921076 ']' 00:25:14.710 05:27:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.710 05:27:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.710 05:27:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.710 05:27:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.710 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.710 [2024-11-19 05:27:31.238998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:14.710 [2024-11-19 05:27:31.239045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.710 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.970 [2024-11-19 05:27:31.307546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.970 [2024-11-19 05:27:31.345521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.970 [2024-11-19 05:27:31.345647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.970 [2024-11-19 05:27:31.345656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.970 [2024-11-19 05:27:31.345665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
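What nvmfappstart does above, reduced to its essentials: launch nvmf_tgt in the background, remember its pid, and poll the RPC socket until the app answers. A minimal sketch of that pattern follows; the retry budget and failure message are illustrative, not copied from this harness, while rpc.py's -s flag, the /var/tmp/spdk.sock default, and the rpc_get_methods method are stock SPDK.

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket; rpc.py exits non-zero until the target is listening.
  for ((i = 0; i < 100; i++)); do
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
  (( i < 100 )) || { echo "nvmf_tgt failed to start" >&2; exit 1; }

Once the loop breaks, the DPDK EAL and reactor start-up notices below confirm the app is running on the four cores selected by -m 0xF.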
00:25:14.970 [2024-11-19 05:27:31.348549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.970 [2024-11-19 05:27:31.348563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.970 [2024-11-19 05:27:31.348652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.970 [2024-11-19 05:27:31.348654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.538 05:27:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.538 05:27:32 -- common/autotest_common.sh@862 -- # return 0 00:25:15.538 05:27:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:15.538 05:27:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.798 05:27:32 -- common/autotest_common.sh@10 -- # set +x 00:25:15.798 05:27:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.798 05:27:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:15.798 05:27:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:19.089 05:27:35 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:19.089 05:27:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:19.089 05:27:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:19.089 05:27:35 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:19.089 05:27:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:19.089 05:27:35 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:19.089 05:27:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:19.089 05:27:35 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:19.089 05:27:35 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:19.349 [2024-11-19 05:27:35.757267] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:19.349 [2024-11-19 05:27:35.777695] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xee7490/0xef5b40) succeed. 00:25:19.349 [2024-11-19 05:27:35.786917] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xee8a80/0xf371e0) succeed. 
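Stripped of the harness plumbing, the RPC calls traced here and just below are the canonical sequence for exporting bdevs over NVMe-oF/RDMA. A condensed sketch, assuming rpc.py is on PATH and 192.168.100.8 is the target-side RDMA IP (both true on this rig; every call and flag appears verbatim in the trace):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py bdev_malloc_create 64 512        # MALLOC_BDEV_SIZE=64 MiB, 512 B blocks -> Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, each spdk_nvme_perf run below reaches the subsystem with -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'.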
00:25:19.349 05:27:35 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.608 05:27:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:19.608 05:27:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.867 05:27:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:19.867 05:27:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:20.126 05:27:36 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:20.126 [2024-11-19 05:27:36.621408] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:20.126 05:27:36 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:20.386 05:27:36 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:20.386 05:27:36 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:20.386 05:27:36 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:20.386 05:27:36 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:21.765 Initializing NVMe Controllers 00:25:21.765 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:21.765 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:21.765 Initialization complete. Launching workers. 00:25:21.765 ======================================================== 00:25:21.765 Latency(us) 00:25:21.765 Device Information : IOPS MiB/s Average min max 00:25:21.765 PCIE (0000:d8:00.0) NSID 1 from core 0: 103287.39 403.47 309.48 9.97 4229.38 00:25:21.765 ======================================================== 00:25:21.765 Total : 103287.39 403.47 309.48 9.97 4229.38 00:25:21.765 00:25:21.765 05:27:38 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:21.765 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.054 Initializing NVMe Controllers 00:25:25.054 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.054 Initialization complete. Launching workers. 
00:25:25.054 ======================================================== 00:25:25.054 Latency(us) 00:25:25.054 Device Information : IOPS MiB/s Average min max 00:25:25.054 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6823.97 26.66 146.34 48.90 6017.19 00:25:25.054 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5316.82 20.77 187.14 68.65 6065.64 00:25:25.054 ======================================================== 00:25:25.054 Total : 12140.80 47.42 164.21 48.90 6065.64 00:25:25.054 00:25:25.054 05:27:41 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:25.054 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.344 Initializing NVMe Controllers 00:25:28.344 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.344 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:28.344 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:28.344 Initialization complete. Launching workers. 00:25:28.344 ======================================================== 00:25:28.344 Latency(us) 00:25:28.344 Device Information : IOPS MiB/s Average min max 00:25:28.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19425.98 75.88 1647.54 435.45 5389.20 00:25:28.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7979.24 7751.31 8200.29 00:25:28.344 ======================================================== 00:25:28.344 Total : 23457.98 91.63 2735.85 435.45 8200.29 00:25:28.344 00:25:28.344 05:27:44 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:28.344 05:27:44 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:28.604 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.798 Initializing NVMe Controllers 00:25:32.798 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.798 Controller IO queue size 128, less than required. 00:25:32.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.798 Controller IO queue size 128, less than required. 00:25:32.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.798 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.798 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.798 Initialization complete. Launching workers. 
00:25:32.798 ======================================================== 00:25:32.798 Latency(us) 00:25:32.798 Device Information : IOPS MiB/s Average min max 00:25:32.798 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4064.50 1016.12 31576.01 14051.16 69790.56 00:25:32.799 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4165.50 1041.38 30557.10 14344.68 51954.18 00:25:32.799 ======================================================== 00:25:32.799 Total : 8230.00 2057.50 31060.31 14051.16 69790.56 00:25:32.799 00:25:32.799 05:27:49 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:32.799 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.058 No valid NVMe controllers or AIO or URING devices found 00:25:33.058 Initializing NVMe Controllers 00:25:33.058 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.058 Controller IO queue size 128, less than required. 00:25:33.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:33.058 Controller IO queue size 128, less than required. 00:25:33.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:33.058 WARNING: Some requested NVMe devices were skipped 00:25:33.058 05:27:49 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:33.316 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.511 Initializing NVMe Controllers 00:25:37.511 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.511 Controller IO queue size 128, less than required. 00:25:37.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:37.511 Controller IO queue size 128, less than required. 00:25:37.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:37.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.511 Initialization complete. Launching workers. 
00:25:37.511 00:25:37.511 ==================== 00:25:37.511 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:37.511 RDMA transport: 00:25:37.511 dev name: mlx5_0 00:25:37.511 polls: 421328 00:25:37.511 idle_polls: 417253 00:25:37.511 completions: 46167 00:25:37.511 queued_requests: 1 00:25:37.511 total_send_wrs: 23147 00:25:37.511 send_doorbell_updates: 3882 00:25:37.511 total_recv_wrs: 23147 00:25:37.511 recv_doorbell_updates: 3882 00:25:37.511 --------------------------------- 00:25:37.511 00:25:37.511 ==================== 00:25:37.511 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:37.511 RDMA transport: 00:25:37.511 dev name: mlx5_0 00:25:37.511 polls: 421320 00:25:37.511 idle_polls: 421045 00:25:37.511 completions: 20533 00:25:37.511 queued_requests: 1 00:25:37.511 total_send_wrs: 10345 00:25:37.511 send_doorbell_updates: 256 00:25:37.511 total_recv_wrs: 10345 00:25:37.511 recv_doorbell_updates: 257 00:25:37.511 --------------------------------- 00:25:37.511 ======================================================== 00:25:37.511 Latency(us) 00:25:37.511 Device Information : IOPS MiB/s Average min max 00:25:37.511 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5811.93 1452.98 22048.70 11073.68 51655.10 00:25:37.511 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2615.04 653.76 49050.44 24842.82 71705.13 00:25:37.511 ======================================================== 00:25:37.511 Total : 8426.98 2106.74 30427.84 11073.68 71705.13 00:25:37.511 00:25:37.511 05:27:53 -- host/perf.sh@66 -- # sync 00:25:37.511 05:27:53 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.770 05:27:54 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:37.770 05:27:54 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:37.770 05:27:54 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:44.340 05:28:00 -- host/perf.sh@72 -- # ls_guid=d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd 00:25:44.340 05:28:00 -- host/perf.sh@73 -- # get_lvs_free_mb d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd 00:25:44.340 05:28:00 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd 00:25:44.340 05:28:00 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:44.340 05:28:00 -- common/autotest_common.sh@1355 -- # local fc 00:25:44.340 05:28:00 -- common/autotest_common.sh@1356 -- # local cs 00:25:44.340 05:28:00 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:44.340 05:28:00 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:44.340 { 00:25:44.340 "uuid": "d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd", 00:25:44.340 "name": "lvs_0", 00:25:44.340 "base_bdev": "Nvme0n1", 00:25:44.340 "total_data_clusters": 476466, 00:25:44.340 "free_clusters": 476466, 00:25:44.340 "block_size": 512, 00:25:44.340 "cluster_size": 4194304 00:25:44.340 } 00:25:44.340 ]' 00:25:44.340 05:28:00 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd") .free_clusters' 00:25:44.340 05:28:00 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:44.340 05:28:00 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd") .cluster_size' 00:25:44.340 
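The two jq probes here feed get_lvs_free_mb, and the cs= and free_mb= lines that follow are just that arithmetic: free clusters times cluster size, converted to MiB. Worked through with this rig's numbers (a sketch; variable names follow the trace):

  fc=476466                              # free_clusters from bdev_lvol_get_lvstores
  cs=4194304                             # cluster_size in bytes (4 MiB)
  free_mb=$(( fc * cs / 1024 / 1024 ))   # 476466 * 4 MiB = 1905864 MiB

Because 1905864 MiB exceeds the 20480 MiB the test wants, free_mb is clamped to 20480 just below before lbd_0 is carved out of lvs_0.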
05:28:00 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:44.340 05:28:00 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:44.340 05:28:00 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:44.340 1905864 00:25:44.340 05:28:00 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:44.340 05:28:00 -- host/perf.sh@78 -- # free_mb=20480 00:25:44.340 05:28:00 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd lbd_0 20480 00:25:44.599 05:28:00 -- host/perf.sh@80 -- # lb_guid=4c847a31-035f-4cfe-8d7a-1b5dda3b6668 00:25:44.599 05:28:00 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4c847a31-035f-4cfe-8d7a-1b5dda3b6668 lvs_n_0 00:25:46.501 05:28:02 -- host/perf.sh@83 -- # ls_nested_guid=f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035 00:25:46.501 05:28:02 -- host/perf.sh@84 -- # get_lvs_free_mb f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035 00:25:46.501 05:28:02 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035 00:25:46.501 05:28:02 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:46.501 05:28:02 -- common/autotest_common.sh@1355 -- # local fc 00:25:46.501 05:28:02 -- common/autotest_common.sh@1356 -- # local cs 00:25:46.501 05:28:02 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:46.760 05:28:03 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:46.760 { 00:25:46.760 "uuid": "d9a75ed0-82d6-4af2-a5d4-6c5c35b0addd", 00:25:46.760 "name": "lvs_0", 00:25:46.760 "base_bdev": "Nvme0n1", 00:25:46.760 "total_data_clusters": 476466, 00:25:46.760 "free_clusters": 471346, 00:25:46.760 "block_size": 512, 00:25:46.760 "cluster_size": 4194304 00:25:46.760 }, 00:25:46.760 { 00:25:46.760 "uuid": "f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035", 00:25:46.760 "name": "lvs_n_0", 00:25:46.760 "base_bdev": "4c847a31-035f-4cfe-8d7a-1b5dda3b6668", 00:25:46.760 "total_data_clusters": 5114, 00:25:46.760 "free_clusters": 5114, 00:25:46.760 "block_size": 512, 00:25:46.760 "cluster_size": 4194304 00:25:46.760 } 00:25:46.760 ]' 00:25:46.760 05:28:03 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035") .free_clusters' 00:25:46.760 05:28:03 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:46.760 05:28:03 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035") .cluster_size' 00:25:46.760 05:28:03 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:46.760 05:28:03 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:46.760 05:28:03 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:46.760 20456 00:25:46.760 05:28:03 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:46.760 05:28:03 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9e0d4ae-d47f-4c0c-8f66-ebd21c7fc035 lbd_nest_0 20456 00:25:47.020 05:28:03 -- host/perf.sh@88 -- # lb_nested_guid=3347f217-816f-40ee-8bcf-7b8047a78bbc 00:25:47.020 05:28:03 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.020 05:28:03 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:47.020 05:28:03 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 3347f217-816f-40ee-8bcf-7b8047a78bbc 00:25:47.280 05:28:03 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:47.540 05:28:03 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:47.540 05:28:03 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:47.540 05:28:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:47.540 05:28:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:47.540 05:28:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:47.540 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.750 Initializing NVMe Controllers 00:25:59.750 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.750 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.750 Initialization complete. Launching workers. 00:25:59.750 ======================================================== 00:25:59.750 Latency(us) 00:25:59.750 Device Information : IOPS MiB/s Average min max 00:25:59.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5936.70 2.90 167.97 67.34 6401.86 00:25:59.750 ======================================================== 00:25:59.750 Total : 5936.70 2.90 167.97 67.34 6401.86 00:25:59.750 00:25:59.750 05:28:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:59.750 05:28:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:59.750 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.956 Initializing NVMe Controllers 00:26:11.956 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.956 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:11.956 Initialization complete. Launching workers. 00:26:11.956 ======================================================== 00:26:11.956 Latency(us) 00:26:11.956 Device Information : IOPS MiB/s Average min max 00:26:11.956 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2674.94 334.37 372.89 152.91 7175.95 00:26:11.956 ======================================================== 00:26:11.956 Total : 2674.94 334.37 372.89 152.91 7175.95 00:26:11.956 00:26:11.956 05:28:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:11.956 05:28:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:11.956 05:28:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:11.956 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.100 Initializing NVMe Controllers 00:26:22.100 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.100 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.100 Initialization complete. Launching workers. 
00:26:22.100 ======================================================== 00:26:22.100 Latency(us) 00:26:22.100 Device Information : IOPS MiB/s Average min max 00:26:22.100 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12311.20 6.01 2598.96 911.40 6868.91 00:26:22.100 ======================================================== 00:26:22.100 Total : 12311.20 6.01 2598.96 911.40 6868.91 00:26:22.100 00:26:22.100 05:28:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.100 05:28:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:22.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.313 Initializing NVMe Controllers 00:26:34.313 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.313 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.313 Initialization complete. Launching workers. 00:26:34.313 ======================================================== 00:26:34.313 Latency(us) 00:26:34.313 Device Information : IOPS MiB/s Average min max 00:26:34.313 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3999.70 499.96 8003.82 3905.47 16047.75 00:26:34.313 ======================================================== 00:26:34.313 Total : 3999.70 499.96 8003.82 3905.47 16047.75 00:26:34.313 00:26:34.313 05:28:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:34.313 05:28:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:34.313 05:28:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:34.313 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.301 Initializing NVMe Controllers 00:26:44.301 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.301 Controller IO queue size 128, less than required. 00:26:44.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.301 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.301 Initialization complete. Launching workers. 00:26:44.301 ======================================================== 00:26:44.301 Latency(us) 00:26:44.301 Device Information : IOPS MiB/s Average min max 00:26:44.301 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19715.50 9.63 6494.50 1633.46 16355.22 00:26:44.301 ======================================================== 00:26:44.301 Total : 19715.50 9.63 6494.50 1633.46 16355.22 00:26:44.301 00:26:44.301 05:29:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.301 05:29:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:44.301 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.519 Initializing NVMe Controllers 00:26:56.519 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.519 Controller IO queue size 128, less than required. 00:26:56.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:56.519 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.519 Initialization complete. Launching workers. 00:26:56.519 ======================================================== 00:26:56.519 Latency(us) 00:26:56.519 Device Information : IOPS MiB/s Average min max 00:26:56.519 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11413.70 1426.71 11215.54 3351.60 23820.53 00:26:56.519 ======================================================== 00:26:56.519 Total : 11413.70 1426.71 11215.54 3351.60 23820.53 00:26:56.519 00:26:56.519 05:29:12 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.519 05:29:12 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3347f217-816f-40ee-8bcf-7b8047a78bbc 00:26:56.519 05:29:12 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:56.519 05:29:13 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c847a31-035f-4cfe-8d7a-1b5dda3b6668 00:26:56.778 05:29:13 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:57.037 05:29:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:57.037 05:29:13 -- host/perf.sh@114 -- # nvmftestfini 00:26:57.037 05:29:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:57.037 05:29:13 -- nvmf/common.sh@116 -- # sync 00:26:57.037 05:29:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:57.037 05:29:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:57.037 05:29:13 -- nvmf/common.sh@119 -- # set +e 00:26:57.037 05:29:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:57.037 05:29:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:57.037 rmmod nvme_rdma 00:26:57.037 rmmod nvme_fabrics 00:26:57.037 05:29:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:57.037 05:29:13 -- nvmf/common.sh@123 -- # set -e 00:26:57.037 05:29:13 -- nvmf/common.sh@124 -- # return 0 00:26:57.037 05:29:13 -- nvmf/common.sh@477 -- # '[' -n 1921076 ']' 00:26:57.037 05:29:13 -- nvmf/common.sh@478 -- # killprocess 1921076 00:26:57.037 05:29:13 -- common/autotest_common.sh@936 -- # '[' -z 1921076 ']' 00:26:57.037 05:29:13 -- common/autotest_common.sh@940 -- # kill -0 1921076 00:26:57.037 05:29:13 -- common/autotest_common.sh@941 -- # uname 00:26:57.037 05:29:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:57.037 05:29:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1921076 00:26:57.037 05:29:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:57.037 05:29:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:57.037 05:29:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1921076' 00:26:57.037 killing process with pid 1921076 00:26:57.037 05:29:13 -- common/autotest_common.sh@955 -- # kill 1921076 00:26:57.037 05:29:13 -- common/autotest_common.sh@960 -- # wait 1921076 00:26:59.575 05:29:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:59.575 05:29:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:26:59.575 00:26:59.575 real 1m51.824s 00:26:59.575 user 7m2.660s 00:26:59.575 sys 0m7.174s 00:26:59.575 05:29:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:59.575 05:29:16 -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.575 ************************************ 00:26:59.575 END TEST nvmf_perf 00:26:59.575 ************************************ 00:26:59.836 05:29:16 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:59.836 05:29:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.836 05:29:16 -- common/autotest_common.sh@10 -- # set +x 00:26:59.836 ************************************ 00:26:59.836 START TEST nvmf_fio_host 00:26:59.836 ************************************ 00:26:59.836 05:29:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:59.836 * Looking for test storage... 00:26:59.836 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:59.836 05:29:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:59.836 05:29:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:59.836 05:29:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:59.836 05:29:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:59.836 05:29:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:59.836 05:29:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:59.836 05:29:16 -- scripts/common.sh@335 -- # IFS=.-: 00:26:59.836 05:29:16 -- scripts/common.sh@335 -- # read -ra ver1 00:26:59.836 05:29:16 -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.836 05:29:16 -- scripts/common.sh@336 -- # read -ra ver2 00:26:59.836 05:29:16 -- scripts/common.sh@337 -- # local 'op=<' 00:26:59.836 05:29:16 -- scripts/common.sh@339 -- # ver1_l=2 00:26:59.836 05:29:16 -- scripts/common.sh@340 -- # ver2_l=1 00:26:59.836 05:29:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:59.836 05:29:16 -- scripts/common.sh@343 -- # case "$op" in 00:26:59.836 05:29:16 -- scripts/common.sh@344 -- # : 1 00:26:59.836 05:29:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:59.836 05:29:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:59.836 05:29:16 -- scripts/common.sh@364 -- # decimal 1 00:26:59.836 05:29:16 -- scripts/common.sh@352 -- # local d=1 00:26:59.836 05:29:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.836 05:29:16 -- scripts/common.sh@354 -- # echo 1 00:26:59.836 05:29:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:59.836 05:29:16 -- scripts/common.sh@365 -- # decimal 2 00:26:59.836 05:29:16 -- scripts/common.sh@352 -- # local d=2 00:26:59.836 05:29:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.836 05:29:16 -- scripts/common.sh@354 -- # echo 2 00:26:59.836 05:29:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:59.836 05:29:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:59.836 05:29:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:59.836 05:29:16 -- scripts/common.sh@367 -- # return 0 00:26:59.836 05:29:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:59.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.836 --rc genhtml_branch_coverage=1 00:26:59.836 --rc genhtml_function_coverage=1 00:26:59.836 --rc genhtml_legend=1 00:26:59.836 --rc geninfo_all_blocks=1 00:26:59.836 --rc geninfo_unexecuted_blocks=1 00:26:59.836 00:26:59.836 ' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:59.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.836 --rc genhtml_branch_coverage=1 00:26:59.836 --rc genhtml_function_coverage=1 00:26:59.836 --rc genhtml_legend=1 00:26:59.836 --rc geninfo_all_blocks=1 00:26:59.836 --rc geninfo_unexecuted_blocks=1 00:26:59.836 00:26:59.836 ' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:59.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.836 --rc genhtml_branch_coverage=1 00:26:59.836 --rc genhtml_function_coverage=1 00:26:59.836 --rc genhtml_legend=1 00:26:59.836 --rc geninfo_all_blocks=1 00:26:59.836 --rc geninfo_unexecuted_blocks=1 00:26:59.836 00:26:59.836 ' 00:26:59.836 05:29:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:59.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.836 --rc genhtml_branch_coverage=1 00:26:59.836 --rc genhtml_function_coverage=1 00:26:59.836 --rc genhtml_legend=1 00:26:59.836 --rc geninfo_all_blocks=1 00:26:59.836 --rc geninfo_unexecuted_blocks=1 00:26:59.836 00:26:59.836 ' 00:26:59.836 05:29:16 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:59.836 05:29:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.836 05:29:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.836 05:29:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.836 05:29:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.836 05:29:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.836 05:29:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.836 05:29:16 -- paths/export.sh@5 -- # export PATH 00:26:59.836 05:29:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.836 05:29:16 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.836 05:29:16 -- nvmf/common.sh@7 -- # uname -s 00:26:59.836 05:29:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.836 05:29:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.836 05:29:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.836 05:29:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.836 05:29:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.836 05:29:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.836 05:29:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.836 05:29:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.836 05:29:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.836 05:29:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.836 05:29:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:59.836 05:29:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:59.836 05:29:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.836 05:29:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.836 05:29:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.836 05:29:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:59.836 05:29:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.836 05:29:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.836 05:29:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.837 05:29:16 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.837 05:29:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.837 05:29:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.837 05:29:16 -- paths/export.sh@5 -- # export PATH 00:26:59.837 05:29:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.837 05:29:16 -- nvmf/common.sh@46 -- # : 0 00:26:59.837 05:29:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:59.837 05:29:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:59.837 05:29:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:59.837 05:29:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.837 05:29:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.837 05:29:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:59.837 05:29:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:59.837 05:29:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:59.837 05:29:16 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:59.837 05:29:16 -- host/fio.sh@14 -- # nvmftestinit 00:26:59.837 05:29:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:26:59.837 05:29:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.837 05:29:16 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:26:59.837 05:29:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:59.837 05:29:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:59.837 05:29:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.837 05:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.837 05:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.837 05:29:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:59.837 05:29:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:59.837 05:29:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:59.837 05:29:16 -- common/autotest_common.sh@10 -- # set +x 00:27:06.411 05:29:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:06.411 05:29:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:06.411 05:29:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:06.411 05:29:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:06.411 05:29:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:06.411 05:29:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:06.411 05:29:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:06.411 05:29:22 -- nvmf/common.sh@294 -- # net_devs=() 00:27:06.411 05:29:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:06.411 05:29:22 -- nvmf/common.sh@295 -- # e810=() 00:27:06.411 05:29:22 -- nvmf/common.sh@295 -- # local -ga e810 00:27:06.411 05:29:22 -- nvmf/common.sh@296 -- # x722=() 00:27:06.411 05:29:22 -- nvmf/common.sh@296 -- # local -ga x722 00:27:06.411 05:29:22 -- nvmf/common.sh@297 -- # mlx=() 00:27:06.411 05:29:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:06.411 05:29:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.411 05:29:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:06.411 05:29:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:06.411 05:29:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:06.411 05:29:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:06.411 05:29:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:06.411 05:29:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:06.411 05:29:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:06.411 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:06.411 05:29:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:06.411 05:29:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:06.412 05:29:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:06.412 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:06.412 05:29:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:06.412 05:29:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:06.412 05:29:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.412 05:29:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:06.412 05:29:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.412 05:29:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:06.412 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:06.412 05:29:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.412 05:29:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.412 05:29:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:06.412 05:29:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.412 05:29:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:06.412 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:06.412 05:29:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.412 05:29:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:06.412 05:29:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:06.412 05:29:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:06.412 05:29:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:06.412 05:29:22 -- nvmf/common.sh@57 -- # uname 00:27:06.412 05:29:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:06.412 05:29:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:06.412 05:29:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:06.412 05:29:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:06.412 05:29:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:06.412 05:29:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:06.412 05:29:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:06.412 05:29:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:06.412 05:29:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:06.412 05:29:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:06.412 05:29:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:06.412 05:29:22 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:06.412 05:29:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:06.412 05:29:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:06.412 05:29:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:06.412 05:29:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:06.412 05:29:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:06.412 05:29:22 -- nvmf/common.sh@104 -- # continue 2 00:27:06.412 05:29:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.412 05:29:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:06.412 05:29:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:06.412 05:29:22 -- nvmf/common.sh@104 -- # continue 2 00:27:06.412 05:29:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:06.412 05:29:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:06.412 05:29:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:06.412 05:29:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:06.412 05:29:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:06.412 05:29:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:06.672 05:29:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:06.672 05:29:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:06.672 05:29:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:06.672 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:06.672 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:06.672 altname enp217s0f0np0 00:27:06.672 altname ens818f0np0 00:27:06.672 inet 192.168.100.8/24 scope global mlx_0_0 00:27:06.672 valid_lft forever preferred_lft forever 00:27:06.672 05:29:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:06.672 05:29:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:06.672 05:29:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:06.672 05:29:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:06.672 05:29:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:06.672 05:29:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:06.672 05:29:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:06.672 05:29:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:06.672 05:29:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:06.672 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:06.672 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:06.672 altname enp217s0f1np1 00:27:06.672 altname ens818f1np1 00:27:06.672 inet 192.168.100.9/24 scope global mlx_0_1 00:27:06.672 valid_lft forever preferred_lft forever 00:27:06.672 05:29:23 -- nvmf/common.sh@410 -- # return 0 00:27:06.672 05:29:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:06.672 05:29:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:06.672 05:29:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:06.672 05:29:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
00:27:06.672 05:29:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:06.672 05:29:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:06.672 05:29:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:06.672 05:29:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:06.672 05:29:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:06.672 05:29:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:06.672 05:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:06.672 05:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.672 05:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:06.672 05:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:06.672 05:29:23 -- nvmf/common.sh@104 -- # continue 2 00:27:06.672 05:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:06.672 05:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.672 05:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:06.672 05:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:06.672 05:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:06.672 05:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:06.672 05:29:23 -- nvmf/common.sh@104 -- # continue 2 00:27:06.672 05:29:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:06.672 05:29:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:06.672 05:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:06.672 05:29:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:06.672 05:29:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:06.672 05:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:06.672 05:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:06.672 05:29:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:06.672 192.168.100.9' 00:27:06.672 05:29:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:06.672 192.168.100.9' 00:27:06.672 05:29:23 -- nvmf/common.sh@445 -- # head -n 1 00:27:06.672 05:29:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:06.672 05:29:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:06.672 192.168.100.9' 00:27:06.672 05:29:23 -- nvmf/common.sh@446 -- # head -n 1 00:27:06.672 05:29:23 -- nvmf/common.sh@446 -- # tail -n +2 00:27:06.672 05:29:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:06.672 05:29:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:06.672 05:29:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:06.672 05:29:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:06.672 05:29:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:06.672 05:29:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:06.672 05:29:23 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:06.672 05:29:23 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:06.672 05:29:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.672 05:29:23 -- common/autotest_common.sh@10 -- # set +x 
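The records above are the whole of the address-discovery step: each RDMA interface found earlier is asked for its IPv4 address through the `ip -o -4 | awk | cut` pipeline, the two answers are joined into RDMA_IP_LIST, and head/tail split them back out as the first and second target IPs before nvme-rdma is loaded. A minimal sketch of that logic, assuming the helper name from the traced nvmf/common.sh and the two mlx interfaces reported above:

# Hedged sketch, not part of the captured trace; mirrors the pipeline shown above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                                              # 192.168.100.8 and .9
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9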
00:27:06.672 05:29:23 -- host/fio.sh@24 -- # nvmfpid=1942106 00:27:06.672 05:29:23 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:06.672 05:29:23 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:06.672 05:29:23 -- host/fio.sh@28 -- # waitforlisten 1942106 00:27:06.672 05:29:23 -- common/autotest_common.sh@829 -- # '[' -z 1942106 ']' 00:27:06.672 05:29:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.672 05:29:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.672 05:29:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.672 05:29:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.672 05:29:23 -- common/autotest_common.sh@10 -- # set +x 00:27:06.672 [2024-11-19 05:29:23.170899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:06.672 [2024-11-19 05:29:23.170960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.672 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.932 [2024-11-19 05:29:23.245827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.932 [2024-11-19 05:29:23.284426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:06.932 [2024-11-19 05:29:23.284544] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.932 [2024-11-19 05:29:23.284570] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.932 [2024-11-19 05:29:23.284580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.932 [2024-11-19 05:29:23.284630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.932 [2024-11-19 05:29:23.284731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.932 [2024-11-19 05:29:23.284818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.932 [2024-11-19 05:29:23.284819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.502 05:29:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.502 05:29:23 -- common/autotest_common.sh@862 -- # return 0 00:27:07.502 05:29:23 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:07.762 [2024-11-19 05:29:24.186089] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x551200/0x5556f0) succeed. 00:27:07.762 [2024-11-19 05:29:24.195239] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5527f0/0x596d90) succeed. 
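At this point host/fio.sh has its target up: nvmf_tgt was launched with shm id 0, tracepoint mask 0xFFFF and a 4-core mask, waitforlisten blocked until the RPC socket answered, the RDMA transport was created with the options echoed in the trace, and both IB devices registered. A hedged recap of that bring-up, with a simple polling loop standing in for the real waitforlisten helper from autotest_common.sh:

# Sketch only; paths shortened relative to the jenkins workspace above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all tracepoints, cores 0-3
nvmfpid=$!
# Stand-in for waitforlisten: poll the default RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# Transport options echoed in the trace; -u is rpc.py's I/O unit size option.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The Malloc1 bdev, the cnode1 subsystem, its namespace, and the rdma listener on 192.168.100.8:4420 then follow over the same RPC socket, as the next records show.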
00:27:08.022 05:29:24 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:08.022 05:29:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.022 05:29:24 -- common/autotest_common.sh@10 -- # set +x 00:27:08.022 05:29:24 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:08.022 Malloc1 00:27:08.022 05:29:24 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:08.282 05:29:24 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.541 05:29:24 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:08.541 [2024-11-19 05:29:25.082686] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:08.801 05:29:25 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:08.801 05:29:25 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:08.801 05:29:25 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:08.801 05:29:25 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:08.801 05:29:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:08.801 05:29:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:08.801 05:29:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:08.801 05:29:25 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.801 05:29:25 -- common/autotest_common.sh@1330 -- # shift 00:27:08.801 05:29:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:08.801 05:29:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:08.801 05:29:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:08.801 05:29:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:08.801 05:29:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:08.801 05:29:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:08.801 05:29:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:08.801 05:29:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:09.377 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:09.377 fio-3.35 00:27:09.377 Starting 1 thread 00:27:09.377 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.913 00:27:11.913 test: (groupid=0, jobs=1): err= 0: pid=1942727: Tue Nov 19 05:29:27 2024 00:27:11.913 read: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(149MiB/2004msec) 00:27:11.913 slat (nsec): min=1373, max=30422, avg=1496.01, stdev=484.00 00:27:11.913 clat (usec): min=2010, max=6121, avg=3344.93, stdev=73.66 00:27:11.913 lat (usec): min=2026, max=6122, avg=3346.42, stdev=73.60 00:27:11.913 clat percentiles (usec): 00:27:11.913 | 1.00th=[ 3294], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:27:11.913 | 30.00th=[ 3326], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3359], 00:27:11.913 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3359], 00:27:11.913 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4424], 99.95th=[ 5342], 00:27:11.913 | 99.99th=[ 6063] 00:27:11.913 bw ( KiB/s): min=74280, max=76616, per=100.00%, avg=75992.00, stdev=1142.03, samples=4 00:27:11.913 iops : min=18572, max=19152, avg=18998.00, stdev=284.15, samples=4 00:27:11.913 write: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(149MiB/2004msec); 0 zone resets 00:27:11.914 slat (nsec): min=1409, max=18112, avg=1570.07, stdev=458.20 00:27:11.914 clat (usec): min=2024, max=6139, avg=3343.34, stdev=70.73 00:27:11.914 lat (usec): min=2035, max=6140, avg=3344.91, stdev=70.68 00:27:11.914 clat percentiles (usec): 00:27:11.914 | 1.00th=[ 3294], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:27:11.914 | 30.00th=[ 3326], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3359], 00:27:11.914 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3359], 00:27:11.914 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4015], 99.95th=[ 5276], 00:27:11.914 | 99.99th=[ 6128] 00:27:11.914 bw ( KiB/s): min=74360, max=76624, per=100.00%, avg=76006.00, stdev=1098.74, samples=4 00:27:11.914 iops : min=18590, max=19156, avg=19001.50, stdev=274.68, samples=4 00:27:11.914 lat (msec) : 4=99.89%, 10=0.11% 00:27:11.914 cpu : usr=99.50%, sys=0.10%, ctx=29, majf=0, minf=2 00:27:11.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:11.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:11.914 issued rwts: total=38066,38060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:11.914 00:27:11.914 Run status group 0 (all jobs): 00:27:11.914 READ: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=149MiB (156MB), run=2004-2004msec 00:27:11.914 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=149MiB (156MB), run=2004-2004msec 00:27:11.914 05:29:27 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:11.914 05:29:27 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:11.914 05:29:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:11.914 05:29:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.914 05:29:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:11.914 05:29:27 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.914 05:29:27 -- common/autotest_common.sh@1330 -- # shift 00:27:11.914 05:29:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:11.914 05:29:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.914 05:29:27 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.914 05:29:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:11.914 05:29:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:11.914 05:29:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:11.914 05:29:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:11.914 05:29:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.914 05:29:28 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.914 05:29:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:11.914 05:29:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:11.914 05:29:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:11.914 05:29:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:11.914 05:29:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:11.914 05:29:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:11.914 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:11.914 fio-3.35 00:27:11.914 Starting 1 thread 00:27:11.914 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.462 00:27:14.462 test: (groupid=0, jobs=1): err= 0: pid=1943211: Tue Nov 19 05:29:30 2024 00:27:14.462 read: IOPS=15.1k, BW=236MiB/s (248MB/s)(463MiB/1959msec) 00:27:14.462 slat (nsec): min=2222, max=35222, avg=2559.14, stdev=876.71 00:27:14.462 clat (usec): min=430, max=8436, avg=1621.36, stdev=1300.92 00:27:14.462 lat (usec): min=433, max=8451, avg=1623.92, stdev=1301.21 00:27:14.462 clat percentiles (usec): 00:27:14.462 | 1.00th=[ 652], 5.00th=[ 742], 10.00th=[ 807], 20.00th=[ 881], 00:27:14.462 | 30.00th=[ 955], 40.00th=[ 1037], 50.00th=[ 1139], 60.00th=[ 1270], 00:27:14.462 | 70.00th=[ 1401], 80.00th=[ 1614], 90.00th=[ 4621], 95.00th=[ 4686], 00:27:14.462 | 99.00th=[ 6128], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7242], 00:27:14.462 | 99.99th=[ 8356] 00:27:14.462 bw ( KiB/s): min=103264, max=124192, per=48.03%, avg=116123.75, stdev=9093.67, samples=4 00:27:14.462 iops : min= 6454, max= 7762, avg=7257.50, stdev=568.21, samples=4 00:27:14.462 write: IOPS=8590, BW=134MiB/s (141MB/s)(236MiB/1755msec); 0 zone resets 00:27:14.462 slat (usec): min=26, max=108, avg=28.85, 
stdev= 5.10 00:27:14.462 clat (usec): min=3843, max=18259, avg=11929.57, stdev=1702.90 00:27:14.462 lat (usec): min=3871, max=18291, avg=11958.42, stdev=1702.68 00:27:14.462 clat percentiles (usec): 00:27:14.462 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:27:14.462 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:27:14.462 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13960], 95.00th=[14615], 00:27:14.462 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:27:14.462 | 99.99th=[18220] 00:27:14.462 bw ( KiB/s): min=110400, max=129504, per=87.71%, avg=120555.25, stdev=7834.76, samples=4 00:27:14.462 iops : min= 6900, max= 8094, avg=7534.50, stdev=489.64, samples=4 00:27:14.462 lat (usec) : 500=0.01%, 750=3.55%, 1000=20.32% 00:27:14.462 lat (msec) : 2=32.66%, 4=2.11%, 10=11.31%, 20=30.04% 00:27:14.462 cpu : usr=95.86%, sys=2.24%, ctx=211, majf=0, minf=1 00:27:14.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:14.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.462 issued rwts: total=29600,15077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.462 00:27:14.462 Run status group 0 (all jobs): 00:27:14.462 READ: bw=236MiB/s (248MB/s), 236MiB/s-236MiB/s (248MB/s-248MB/s), io=463MiB (485MB), run=1959-1959msec 00:27:14.462 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=236MiB (247MB), run=1755-1755msec 00:27:14.462 05:29:30 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.462 05:29:30 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:14.462 05:29:30 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:14.462 05:29:30 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:14.462 05:29:30 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:14.462 05:29:30 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:14.462 05:29:30 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:14.462 05:29:30 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:14.462 05:29:30 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:14.462 05:29:30 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:14.462 05:29:30 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:27:14.462 05:29:30 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:27:17.749 Nvme0n1 00:27:17.749 05:29:34 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:23.018 05:29:39 -- host/fio.sh@53 -- # ls_guid=ef571d78-b86f-43c8-8801-7d2776791c80 00:27:23.018 05:29:39 -- host/fio.sh@54 -- # get_lvs_free_mb ef571d78-b86f-43c8-8801-7d2776791c80 00:27:23.018 05:29:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ef571d78-b86f-43c8-8801-7d2776791c80 00:27:23.018 05:29:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:23.018 05:29:39 -- common/autotest_common.sh@1355 -- # local fc 00:27:23.018 05:29:39 -- common/autotest_common.sh@1356 -- # local cs 00:27:23.018 05:29:39 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:23.276 05:29:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:23.276 { 00:27:23.276 "uuid": "ef571d78-b86f-43c8-8801-7d2776791c80", 00:27:23.276 "name": "lvs_0", 00:27:23.276 "base_bdev": "Nvme0n1", 00:27:23.276 "total_data_clusters": 1862, 00:27:23.276 "free_clusters": 1862, 00:27:23.276 "block_size": 512, 00:27:23.276 "cluster_size": 1073741824 00:27:23.276 } 00:27:23.276 ]' 00:27:23.276 05:29:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ef571d78-b86f-43c8-8801-7d2776791c80") .free_clusters' 00:27:23.276 05:29:39 -- common/autotest_common.sh@1358 -- # fc=1862 00:27:23.276 05:29:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ef571d78-b86f-43c8-8801-7d2776791c80") .cluster_size' 00:27:23.276 05:29:39 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:23.276 05:29:39 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:27:23.276 05:29:39 -- common/autotest_common.sh@1363 -- # echo 1906688 00:27:23.276 1906688 00:27:23.276 05:29:39 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:23.843 ad10a0c8-3079-43ce-a34d-9b988d96cacd 00:27:23.843 05:29:40 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:24.102 05:29:40 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:24.523 05:29:40 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:24.523 05:29:40 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:24.523 05:29:40 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:24.523 05:29:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:24.523 05:29:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.523 05:29:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:24.523 05:29:40 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.523 05:29:40 -- common/autotest_common.sh@1330 -- # shift 00:27:24.523 05:29:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:24.523 05:29:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.523 05:29:40 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.523 05:29:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:24.523 05:29:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:24.523 05:29:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:24.523 05:29:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:24.523 05:29:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
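The 1906688 echoed a few records back is just lvs_0's free space restated in MiB: get_lvs_free_mb pulls free_clusters and cluster_size out of the bdev_lvol_get_lvstores JSON with jq and multiplies, and lbd_0 is then created at exactly that size. The arithmetic with this log's numbers, as a sketch:

# Values reported for lvs_0 above.
fc=1862                              # free_clusters
cs=1073741824                        # cluster_size: 1 GiB
echo $(( fc * cs / 1024 / 1024 ))    # 1862 * 1024 = 1906688 MiB -> size of lbd_0

The nested store built later in this test repeats the same computation with 4 MiB clusters: 476206 * 4 = 1904824, the size passed to lbd_nest_0.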
00:27:24.523 05:29:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:24.524 05:29:40 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.524 05:29:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:24.524 05:29:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:24.524 05:29:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:24.524 05:29:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:24.524 05:29:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:24.782 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:24.782 fio-3.35 00:27:24.782 Starting 1 thread 00:27:24.782 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.316 00:27:27.316 test: (groupid=0, jobs=1): err= 0: pid=1945546: Tue Nov 19 05:29:43 2024 00:27:27.316 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(79.6MiB/2004msec) 00:27:27.316 slat (nsec): min=1331, max=16915, avg=1438.67, stdev=242.18 00:27:27.316 clat (usec): min=201, max=358907, avg=6248.24, stdev=19925.37 00:27:27.316 lat (usec): min=202, max=358910, avg=6249.68, stdev=19925.40 00:27:27.316 clat percentiles (msec): 00:27:27.316 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:27.316 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:27.316 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:27.316 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 359], 99.95th=[ 359], 00:27:27.316 | 99.99th=[ 359] 00:27:27.316 bw ( KiB/s): min=12942, max=49968, per=99.86%, avg=40611.50, stdev=18447.13, samples=4 00:27:27.316 iops : min= 3235, max=12492, avg=10152.75, stdev=4612.05, samples=4 00:27:27.316 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(79.7MiB/2004msec); 0 zone resets 00:27:27.316 slat (nsec): min=1370, max=17145, avg=1553.42, stdev=329.18 00:27:27.316 clat (usec): min=183, max=359226, avg=6215.14, stdev=19360.11 00:27:27.316 lat (usec): min=185, max=359230, avg=6216.70, stdev=19360.17 00:27:27.316 clat percentiles (msec): 00:27:27.316 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:27.316 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:27.316 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:27.316 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 359], 99.95th=[ 359], 00:27:27.316 | 99.99th=[ 359] 00:27:27.316 bw ( KiB/s): min=13381, max=49872, per=99.88%, avg=40669.25, stdev=18192.38, samples=4 00:27:27.316 iops : min= 3345, max=12468, avg=10167.25, stdev=4548.22, samples=4 00:27:27.316 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:27:27.316 lat (msec) : 2=0.04%, 4=0.26%, 10=99.34%, 500=0.31% 00:27:27.316 cpu : usr=99.55%, sys=0.10%, ctx=15, majf=0, minf=2 00:27:27.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:27.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:27.316 issued rwts: total=20374,20400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:27.316 00:27:27.316 Run status group 0 (all jobs): 00:27:27.316 READ: bw=39.7MiB/s (41.6MB/s), 
39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=79.6MiB (83.5MB), run=2004-2004msec 00:27:27.316 WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=79.7MiB (83.6MB), run=2004-2004msec 00:27:27.316 05:29:43 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:27.316 05:29:43 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:28.691 05:29:45 -- host/fio.sh@64 -- # ls_nested_guid=ce130619-2f34-4f30-b960-7967194b8f67 00:27:28.691 05:29:45 -- host/fio.sh@65 -- # get_lvs_free_mb ce130619-2f34-4f30-b960-7967194b8f67 00:27:28.691 05:29:45 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ce130619-2f34-4f30-b960-7967194b8f67 00:27:28.691 05:29:45 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:28.691 05:29:45 -- common/autotest_common.sh@1355 -- # local fc 00:27:28.691 05:29:45 -- common/autotest_common.sh@1356 -- # local cs 00:27:28.691 05:29:45 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:28.691 05:29:45 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:28.691 { 00:27:28.691 "uuid": "ef571d78-b86f-43c8-8801-7d2776791c80", 00:27:28.691 "name": "lvs_0", 00:27:28.691 "base_bdev": "Nvme0n1", 00:27:28.691 "total_data_clusters": 1862, 00:27:28.691 "free_clusters": 0, 00:27:28.691 "block_size": 512, 00:27:28.691 "cluster_size": 1073741824 00:27:28.691 }, 00:27:28.691 { 00:27:28.691 "uuid": "ce130619-2f34-4f30-b960-7967194b8f67", 00:27:28.691 "name": "lvs_n_0", 00:27:28.691 "base_bdev": "ad10a0c8-3079-43ce-a34d-9b988d96cacd", 00:27:28.692 "total_data_clusters": 476206, 00:27:28.692 "free_clusters": 476206, 00:27:28.692 "block_size": 512, 00:27:28.692 "cluster_size": 4194304 00:27:28.692 } 00:27:28.692 ]' 00:27:28.692 05:29:45 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ce130619-2f34-4f30-b960-7967194b8f67") .free_clusters' 00:27:28.692 05:29:45 -- common/autotest_common.sh@1358 -- # fc=476206 00:27:28.692 05:29:45 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ce130619-2f34-4f30-b960-7967194b8f67") .cluster_size' 00:27:28.950 05:29:45 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:28.950 05:29:45 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:27:28.950 05:29:45 -- common/autotest_common.sh@1363 -- # echo 1904824 00:27:28.950 1904824 00:27:28.950 05:29:45 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:29.885 8c0fcdc3-34ec-4d73-a070-4000fd14ebb5 00:27:29.885 05:29:46 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:29.885 05:29:46 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:30.143 05:29:46 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:30.401 05:29:46 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.401 05:29:46 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.401 05:29:46 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:30.401 05:29:46 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.401 05:29:46 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:30.401 05:29:46 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.401 05:29:46 -- common/autotest_common.sh@1330 -- # shift 00:27:30.401 05:29:46 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:30.401 05:29:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:30.401 05:29:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:30.401 05:29:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:30.401 05:29:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:30.401 05:29:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:30.401 05:29:46 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:30.401 05:29:46 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.661 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:30.661 fio-3.35 00:27:30.661 Starting 1 thread 00:27:30.661 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.193 00:27:33.193 test: (groupid=0, jobs=1): err= 0: pid=1946741: Tue Nov 19 05:29:49 2024 00:27:33.193 read: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(84.7MiB/2005msec) 00:27:33.193 slat (nsec): min=1347, max=17247, avg=1465.45, stdev=253.60 00:27:33.193 clat (usec): min=2395, max=10546, avg=5852.65, stdev=170.97 00:27:33.193 lat (usec): min=2401, max=10548, avg=5854.12, stdev=170.94 00:27:33.193 clat percentiles (usec): 00:27:33.193 | 1.00th=[ 5735], 5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5800], 00:27:33.193 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5866], 60.00th=[ 5866], 00:27:33.193 | 70.00th=[ 5866], 80.00th=[ 5866], 90.00th=[ 5866], 95.00th=[ 5932], 00:27:33.193 | 99.00th=[ 5997], 99.50th=[ 5997], 99.90th=[ 9110], 99.95th=[ 9372], 00:27:33.193 | 99.99th=[10552] 00:27:33.193 bw ( KiB/s): min=41416, max=44048, per=99.93%, avg=43206.00, stdev=1206.74, samples=4 00:27:33.193 iops : min=10354, max=11012, avg=10801.50, stdev=301.69, samples=4 00:27:33.194 write: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(84.5MiB/2005msec); 0 zone 
resets 00:27:33.194 slat (nsec): min=1387, max=17408, avg=1589.57, stdev=328.07 00:27:33.194 clat (usec): min=3790, max=10527, avg=5873.32, stdev=152.87 00:27:33.194 lat (usec): min=3794, max=10529, avg=5874.91, stdev=152.85 00:27:33.194 clat percentiles (usec): 00:27:33.194 | 1.00th=[ 5800], 5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5866], 00:27:33.194 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5866], 60.00th=[ 5866], 00:27:33.194 | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5932], 00:27:33.194 | 99.00th=[ 5997], 99.50th=[ 5997], 99.90th=[ 7898], 99.95th=[ 9372], 00:27:33.194 | 99.99th=[10552] 00:27:33.194 bw ( KiB/s): min=41808, max=43736, per=99.99%, avg=43126.00, stdev=894.67, samples=4 00:27:33.194 iops : min=10452, max=10934, avg=10781.50, stdev=223.67, samples=4 00:27:33.194 lat (msec) : 4=0.08%, 10=99.90%, 20=0.02% 00:27:33.194 cpu : usr=99.55%, sys=0.10%, ctx=15, majf=0, minf=2 00:27:33.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:33.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:33.194 issued rwts: total=21673,21620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:33.194 00:27:33.194 Run status group 0 (all jobs): 00:27:33.194 READ: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=84.7MiB (88.8MB), run=2005-2005msec 00:27:33.194 WRITE: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=84.5MiB (88.6MB), run=2005-2005msec 00:27:33.194 05:29:49 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:33.194 05:29:49 -- host/fio.sh@74 -- # sync 00:27:33.194 05:29:49 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:41.308 05:29:56 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:41.308 05:29:57 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:46.573 05:30:02 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:46.573 05:30:02 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:49.858 05:30:05 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:49.858 05:30:05 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:49.858 05:30:05 -- host/fio.sh@86 -- # nvmftestfini 00:27:49.858 05:30:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:49.858 05:30:05 -- nvmf/common.sh@116 -- # sync 00:27:49.858 05:30:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:49.858 05:30:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:49.858 05:30:05 -- nvmf/common.sh@119 -- # set +e 00:27:49.858 05:30:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:49.858 05:30:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:49.858 rmmod nvme_rdma 00:27:49.858 rmmod nvme_fabrics 00:27:49.858 05:30:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:49.858 05:30:06 -- nvmf/common.sh@123 -- # set -e 00:27:49.858 05:30:06 -- nvmf/common.sh@124 -- # return 0 00:27:49.858 05:30:06 -- nvmf/common.sh@477 -- # '[' -n 1942106 ']' 00:27:49.858 05:30:06 -- 
nvmf/common.sh@478 -- # killprocess 1942106 00:27:49.858 05:30:06 -- common/autotest_common.sh@936 -- # '[' -z 1942106 ']' 00:27:49.858 05:30:06 -- common/autotest_common.sh@940 -- # kill -0 1942106 00:27:49.858 05:30:06 -- common/autotest_common.sh@941 -- # uname 00:27:49.858 05:30:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:49.858 05:30:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1942106 00:27:49.858 05:30:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:49.858 05:30:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:49.858 05:30:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1942106' 00:27:49.858 killing process with pid 1942106 00:27:49.858 05:30:06 -- common/autotest_common.sh@955 -- # kill 1942106 00:27:49.858 05:30:06 -- common/autotest_common.sh@960 -- # wait 1942106 00:27:49.858 05:30:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:49.858 05:30:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:49.858 00:27:49.858 real 0m50.194s 00:27:49.858 user 3m38.971s 00:27:49.858 sys 0m7.715s 00:27:49.858 05:30:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.858 05:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:49.858 ************************************ 00:27:49.858 END TEST nvmf_fio_host 00:27:49.858 ************************************ 00:27:49.858 05:30:06 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:49.858 05:30:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.858 05:30:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.858 05:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.118 ************************************ 00:27:50.118 START TEST nvmf_failover 00:27:50.118 ************************************ 00:27:50.118 05:30:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:50.118 * Looking for test storage... 00:27:50.118 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:50.118 05:30:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:50.118 05:30:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:50.118 05:30:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:50.118 05:30:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:50.118 05:30:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:50.118 05:30:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:50.118 05:30:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:50.118 05:30:06 -- scripts/common.sh@335 -- # IFS=.-: 00:27:50.118 05:30:06 -- scripts/common.sh@335 -- # read -ra ver1 00:27:50.118 05:30:06 -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.118 05:30:06 -- scripts/common.sh@336 -- # read -ra ver2 00:27:50.118 05:30:06 -- scripts/common.sh@337 -- # local 'op=<' 00:27:50.118 05:30:06 -- scripts/common.sh@339 -- # ver1_l=2 00:27:50.118 05:30:06 -- scripts/common.sh@340 -- # ver2_l=1 00:27:50.118 05:30:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:50.118 05:30:06 -- scripts/common.sh@343 -- # case "$op" in 00:27:50.118 05:30:06 -- scripts/common.sh@344 -- # : 1 00:27:50.118 05:30:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:50.118 05:30:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.118 05:30:06 -- scripts/common.sh@364 -- # decimal 1 00:27:50.118 05:30:06 -- scripts/common.sh@352 -- # local d=1 00:27:50.118 05:30:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.118 05:30:06 -- scripts/common.sh@354 -- # echo 1 00:27:50.118 05:30:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:50.118 05:30:06 -- scripts/common.sh@365 -- # decimal 2 00:27:50.118 05:30:06 -- scripts/common.sh@352 -- # local d=2 00:27:50.118 05:30:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.118 05:30:06 -- scripts/common.sh@354 -- # echo 2 00:27:50.118 05:30:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:50.118 05:30:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:50.118 05:30:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:50.118 05:30:06 -- scripts/common.sh@367 -- # return 0 00:27:50.118 05:30:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.118 05:30:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.118 --rc genhtml_branch_coverage=1 00:27:50.118 --rc genhtml_function_coverage=1 00:27:50.118 --rc genhtml_legend=1 00:27:50.118 --rc geninfo_all_blocks=1 00:27:50.118 --rc geninfo_unexecuted_blocks=1 00:27:50.118 00:27:50.118 ' 00:27:50.118 05:30:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.118 --rc genhtml_branch_coverage=1 00:27:50.118 --rc genhtml_function_coverage=1 00:27:50.118 --rc genhtml_legend=1 00:27:50.118 --rc geninfo_all_blocks=1 00:27:50.118 --rc geninfo_unexecuted_blocks=1 00:27:50.118 00:27:50.118 ' 00:27:50.118 05:30:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.118 --rc genhtml_branch_coverage=1 00:27:50.118 --rc genhtml_function_coverage=1 00:27:50.118 --rc genhtml_legend=1 00:27:50.118 --rc geninfo_all_blocks=1 00:27:50.118 --rc geninfo_unexecuted_blocks=1 00:27:50.118 00:27:50.118 ' 00:27:50.118 05:30:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.118 --rc genhtml_branch_coverage=1 00:27:50.118 --rc genhtml_function_coverage=1 00:27:50.118 --rc genhtml_legend=1 00:27:50.118 --rc geninfo_all_blocks=1 00:27:50.118 --rc geninfo_unexecuted_blocks=1 00:27:50.118 00:27:50.118 ' 00:27:50.118 05:30:06 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.118 05:30:06 -- nvmf/common.sh@7 -- # uname -s 00:27:50.118 05:30:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.118 05:30:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.118 05:30:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.118 05:30:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.118 05:30:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.118 05:30:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.118 05:30:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.118 05:30:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.118 05:30:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.118 05:30:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.118 05:30:06 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:50.118 05:30:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:50.118 05:30:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.118 05:30:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.118 05:30:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.118 05:30:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:50.118 05:30:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.118 05:30:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.118 05:30:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.118 05:30:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.118 05:30:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.118 05:30:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.118 05:30:06 -- paths/export.sh@5 -- # export PATH 00:27:50.118 05:30:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.118 05:30:06 -- nvmf/common.sh@46 -- # : 0 00:27:50.118 05:30:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:50.118 05:30:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:50.118 05:30:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:50.118 05:30:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.118 05:30:06 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.118 05:30:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:50.118 05:30:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:50.118 05:30:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:50.118 05:30:06 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.118 05:30:06 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.118 05:30:06 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:50.118 05:30:06 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:50.118 05:30:06 -- host/failover.sh@18 -- # nvmftestinit 00:27:50.118 05:30:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:50.118 05:30:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.118 05:30:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:50.118 05:30:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:50.118 05:30:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:50.118 05:30:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.118 05:30:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.118 05:30:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.118 05:30:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:50.118 05:30:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:50.119 05:30:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:50.119 05:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:56.695 05:30:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:56.695 05:30:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:56.695 05:30:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:56.695 05:30:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:56.695 05:30:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:56.695 05:30:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:56.695 05:30:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:56.695 05:30:13 -- nvmf/common.sh@294 -- # net_devs=() 00:27:56.695 05:30:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:56.695 05:30:13 -- nvmf/common.sh@295 -- # e810=() 00:27:56.695 05:30:13 -- nvmf/common.sh@295 -- # local -ga e810 00:27:56.695 05:30:13 -- nvmf/common.sh@296 -- # x722=() 00:27:56.695 05:30:13 -- nvmf/common.sh@296 -- # local -ga x722 00:27:56.695 05:30:13 -- nvmf/common.sh@297 -- # mlx=() 00:27:56.695 05:30:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:56.695 05:30:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.695 05:30:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.695 05:30:13 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:56.695 05:30:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.695 05:30:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:56.695 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:56.695 05:30:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:56.695 05:30:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.695 05:30:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:56.695 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:56.695 05:30:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:56.695 05:30:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:56.695 05:30:13 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.695 05:30:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.695 05:30:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.695 05:30:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.695 05:30:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:56.695 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:56.695 05:30:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.695 05:30:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.695 05:30:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.695 05:30:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.695 05:30:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:56.695 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:56.695 05:30:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.695 05:30:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:56.695 05:30:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:56.695 05:30:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:56.695 05:30:13 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:56.695 05:30:13 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
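Both "Found net devices under 0000:d9:00.x" records in this second discovery pass (and in the earlier one for host/fio.sh) come from a plain sysfs glob: for each matched ConnectX PCI function the script lists /sys/bus/pci/devices/$pci/net/ and keeps only the interface names. A sketch using the variable names from the traced script:

# Hedged sketch of the sysfs walk traced at nvmf/common.sh@381-389 above.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done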
00:27:56.695 05:30:13 -- nvmf/common.sh@57 -- # uname 00:27:56.695 05:30:13 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:56.695 05:30:13 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:56.696 05:30:13 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:56.696 05:30:13 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:56.696 05:30:13 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:56.696 05:30:13 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:56.696 05:30:13 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:56.696 05:30:13 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:56.696 05:30:13 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:56.696 05:30:13 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:56.696 05:30:13 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:56.696 05:30:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:56.696 05:30:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:56.696 05:30:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:56.696 05:30:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:56.696 05:30:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:56.696 05:30:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@104 -- # continue 2 00:27:56.696 05:30:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@104 -- # continue 2 00:27:56.696 05:30:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:56.696 05:30:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.696 05:30:13 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:56.696 05:30:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:56.696 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:56.696 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:56.696 altname enp217s0f0np0 00:27:56.696 altname ens818f0np0 00:27:56.696 inet 192.168.100.8/24 scope global mlx_0_0 00:27:56.696 valid_lft forever preferred_lft forever 00:27:56.696 05:30:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:56.696 05:30:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.696 05:30:13 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:56.696 05:30:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:56.696 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:56.696 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:56.696 altname enp217s0f1np1 00:27:56.696 altname ens818f1np1 00:27:56.696 inet 192.168.100.9/24 scope global mlx_0_1 00:27:56.696 valid_lft forever preferred_lft forever 00:27:56.696 05:30:13 -- nvmf/common.sh@410 -- # return 0 00:27:56.696 05:30:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:56.696 05:30:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:56.696 05:30:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:56.696 05:30:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:56.696 05:30:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:56.696 05:30:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:56.696 05:30:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:56.696 05:30:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:56.696 05:30:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:56.696 05:30:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@104 -- # continue 2 00:27:56.696 05:30:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.696 05:30:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:56.696 05:30:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@104 -- # continue 2 00:27:56.696 05:30:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:56.696 05:30:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.696 05:30:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:56.696 05:30:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:56.696 05:30:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.696 05:30:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:56.696 192.168.100.9' 00:27:56.696 05:30:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:56.696 192.168.100.9' 00:27:56.696 05:30:13 -- nvmf/common.sh@445 -- # head -n 1 00:27:56.696 05:30:13 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:56.696 05:30:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:56.696 192.168.100.9' 00:27:56.954 05:30:13 -- nvmf/common.sh@446 -- 
# tail -n +2 00:27:56.954 05:30:13 -- nvmf/common.sh@446 -- # head -n 1 00:27:56.954 05:30:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:56.954 05:30:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:56.954 05:30:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:56.954 05:30:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:56.954 05:30:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:56.954 05:30:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:56.954 05:30:13 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:56.954 05:30:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:56.954 05:30:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.954 05:30:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.954 05:30:13 -- nvmf/common.sh@469 -- # nvmfpid=1953693 00:27:56.954 05:30:13 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:56.954 05:30:13 -- nvmf/common.sh@470 -- # waitforlisten 1953693 00:27:56.954 05:30:13 -- common/autotest_common.sh@829 -- # '[' -z 1953693 ']' 00:27:56.954 05:30:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.954 05:30:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.954 05:30:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.954 05:30:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.954 05:30:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.954 [2024-11-19 05:30:13.331004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:56.954 [2024-11-19 05:30:13.331054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.954 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.954 [2024-11-19 05:30:13.400227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.954 [2024-11-19 05:30:13.437670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.954 [2024-11-19 05:30:13.437780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.954 [2024-11-19 05:30:13.437790] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.954 [2024-11-19 05:30:13.437802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
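With both ports up as mlx_0_0 and mlx_0_1, the harness collects one IPv4 address per RDMA interface and splits the list into first and second target IPs; the head/tail pipeline above is exactly that split. A condensed sketch of the address harvest, with the interface names and addresses from this run:

  # Collect the IPv4 address of each RDMA netdev, then pick the targets.
  get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9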
00:27:56.954 [2024-11-19 05:30:13.437902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.954 [2024-11-19 05:30:13.437985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.954 [2024-11-19 05:30:13.437987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.889 05:30:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.889 05:30:14 -- common/autotest_common.sh@862 -- # return 0 00:27:57.889 05:30:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:57.889 05:30:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.889 05:30:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.889 05:30:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.889 05:30:14 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:57.889 [2024-11-19 05:30:14.382580] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xab29c0/0xab6eb0) succeed. 00:27:57.889 [2024-11-19 05:30:14.391558] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xab3f10/0xaf8550) succeed. 00:27:58.147 05:30:14 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:58.147 Malloc0 00:27:58.405 05:30:14 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.405 05:30:14 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.664 05:30:15 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:58.664 [2024-11-19 05:30:15.225047] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:58.922 05:30:15 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:58.922 [2024-11-19 05:30:15.417420] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:58.922 05:30:15 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:59.181 [2024-11-19 05:30:15.610078] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:59.181 05:30:15 -- host/failover.sh@31 -- # bdevperf_pid=1954253 00:27:59.181 05:30:15 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:59.181 05:30:15 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.181 05:30:15 -- host/failover.sh@34 -- # waitforlisten 1954253 /var/tmp/bdevperf.sock 00:27:59.181 05:30:15 -- common/autotest_common.sh@829 -- # '[' -z 1954253 ']' 00:27:59.181 05:30:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.181 05:30:15 
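The target side is now configured entirely over rpc.py: one RDMA transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, block size 512), and subsystem cnode1 exposed on three ports so the host has paths to fail over between. The same sequence, condensed from the RPC calls in the trace above into a runnable sketch:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do    # three listeners on the same IP
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s "$port"
  done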
-- common/autotest_common.sh@834 -- # local max_retries=100 00:27:59.181 05:30:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.181 05:30:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:59.181 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:28:00.116 05:30:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.116 05:30:16 -- common/autotest_common.sh@862 -- # return 0 00:28:00.116 05:30:16 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.374 NVMe0n1 00:28:00.374 05:30:16 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.633 00:28:00.633 05:30:17 -- host/failover.sh@39 -- # run_test_pid=1954423 00:28:00.633 05:30:17 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:00.633 05:30:17 -- host/failover.sh@41 -- # sleep 1 00:28:01.567 05:30:18 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:01.825 05:30:18 -- host/failover.sh@45 -- # sleep 3 00:28:05.108 05:30:21 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.108 00:28:05.108 05:30:21 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:05.108 05:30:21 -- host/failover.sh@50 -- # sleep 3 00:28:08.392 05:30:24 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:08.392 [2024-11-19 05:30:24.817201] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:08.392 05:30:24 -- host/failover.sh@55 -- # sleep 1 00:28:09.327 05:30:25 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:09.585 05:30:26 -- host/failover.sh@59 -- # wait 1954423 00:28:16.241 0 00:28:16.241 05:30:32 -- host/failover.sh@61 -- # killprocess 1954253 00:28:16.241 05:30:32 -- common/autotest_common.sh@936 -- # '[' -z 1954253 ']' 00:28:16.241 05:30:32 -- common/autotest_common.sh@940 -- # kill -0 1954253 00:28:16.241 05:30:32 -- common/autotest_common.sh@941 -- # uname 00:28:16.241 05:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:16.241 05:30:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1954253 00:28:16.241 05:30:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:16.241 05:30:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:16.241 05:30:32 -- 
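bdevperf then attaches the same subsystem twice through its private RPC socket, giving bdev_nvme two paths under a single NVMe0n1 bdev, and the test removes and re-adds listeners while I/O runs; each removal should fail I/O over to a surviving path rather than fail the run. The choreography from the trace above, condensed to its first failover step:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  sub=nqn.2016-06.io.spdk:cnode1
  # Two paths to one subsystem -> one failover-capable NVMe0n1 bdev.
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n $sub
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4421 -f ipv4 -n $sub
  # With I/O in flight, drop the active listener; bdev_nvme should
  # reconnect on the other port instead of failing the workload.
  $rpc nvmf_subsystem_remove_listener $sub -t rdma -a 192.168.100.8 -s 4420

The trace repeats this pattern (add 4422, remove 4421, re-add 4420, remove 4422) before waiting on the bdevperf test run, which exits 0.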
common/autotest_common.sh@954 -- # echo 'killing process with pid 1954253' 00:28:16.241 killing process with pid 1954253 00:28:16.241 05:30:32 -- common/autotest_common.sh@955 -- # kill 1954253 00:28:16.241 05:30:32 -- common/autotest_common.sh@960 -- # wait 1954253 00:28:16.241 05:30:32 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:16.241 [2024-11-19 05:30:15.680897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:16.241 [2024-11-19 05:30:15.680953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954253 ] 00:28:16.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.241 [2024-11-19 05:30:15.750664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.241 [2024-11-19 05:30:15.787324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.241 Running I/O for 15 seconds... 00:28:16.241 [2024-11-19 05:30:19.185785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.241 [2024-11-19 05:30:19.185828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.241 [2024-11-19 05:30:19.185846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.241 [2024-11-19 05:30:19.185856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.241 [2024-11-19 05:30:19.185868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.185898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.185918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.185938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.185958] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.185977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.185988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.185997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182700 00:28:16.242 [2024-11-19 05:30:19.186513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 
00:28:16.242 [2024-11-19 05:30:19.186524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.242 [2024-11-19 05:30:19.186576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184200 00:28:16.242 [2024-11-19 05:30:19.186596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.242 [2024-11-19 05:30:19.186606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.186851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.186871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186900] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.186908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.186987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.186998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.187006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.187025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.187182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.187223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 
00:28:16.243 [2024-11-19 05:30:19.187272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184200 00:28:16.243 [2024-11-19 05:30:19.187300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.243 [2024-11-19 05:30:19.187319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.243 [2024-11-19 05:30:19.187330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182700 00:28:16.243 [2024-11-19 05:30:19.187339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.244 [2024-11-19 05:30:19.187358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.244 [2024-11-19 05:30:19.187377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182700 00:28:16.244 [2024-11-19 05:30:19.187397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182700 00:28:16.244 [2024-11-19 05:30:19.187416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182700 00:28:16.244 [2024-11-19 05:30:19.187438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:16.244 [2024-11-19 05:30:19.187457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184200 00:28:16.244 [2024-11-19 05:30:19.187476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184200 00:28:16.244 [2024-11-19 05:30:19.187496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184200 00:28:16.244 [2024-11-19 05:30:19.187515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184200 00:28:16.244 [2024-11-19 05:30:19.187539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182700 00:28:16.244 [2024-11-19 05:30:19.187558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184200 00:28:16.244 [2024-11-19 05:30:19.187577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.244 [2024-11-19 05:30:19.187597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.244 [2024-11-19 05:30:19.187616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0 00:28:16.244 [2024-11-19 05:30:19.187627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182700 00:28:16.244 [2024-11-19 05:30:19.187636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0
00:28:16.244 [2024-11-19 05:30:19.187646 .. 05:30:19.188320] nvme_qpair.c: 243/474: *NOTICE*: [condensed: several dozen queued READ/WRITE commands (sqid:1 nsid:1 len:8, lbas in the 89480-90344 range, SGL KEYED/DATA BLOCK details elided) each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0]
00:28:16.245 [2024-11-19 05:30:19.190211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:16.245 [2024-11-19 05:30:19.190225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:16.245 [2024-11-19 05:30:19.190238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90352 len:8 PRP1 0x0 PRP2 0x0
00:28:16.245 [2024-11-19 05:30:19.190247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.245 [2024-11-19 05:30:19.190288] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
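Note: the "(00/08)" in every completion above is the NVMe status pair status-code-type 0x0 (generic) / status-code 0x08 (command aborted due to SQ deletion), which is what each queued I/O receives when its submission queue is torn down for the reset. A minimal sketch of how that pair decodes, assuming only public SPDK definitions (the helper name print_abort_reason is hypothetical):

    /* Decode the "(00/08)" printed by spdk_nvme_print_completion above. */
    #include <stdio.h>
    #include "spdk/nvme_spec.h"

    static void print_abort_reason(const struct spdk_nvme_cpl *cpl)
    {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Matches "ABORTED - SQ DELETION (00/08)" in the log. */
                    printf("aborted by SQ deletion (sct=%u sc=0x%02x)\n",
                           (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
            }
    }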
00:28:16.245 [2024-11-19 05:30:19.190305] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:28:16.245 [2024-11-19 05:30:19.190316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:16.245 [2024-11-19 05:30:19.192133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:16.245 [2024-11-19 05:30:19.206518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:16.245 [2024-11-19 05:30:19.235231] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:16.245 [2024-11-19 05:30:22.621517 .. 05:30:22.624059] nvme_qpair.c: 243/474: *NOTICE*: [condensed: several dozen queued READ/WRITE commands (sqid:1 nsid:1 len:8, lbas in the 44672-45992 range, SGL KEYED/DATA BLOCK details elided) each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0]
00:28:16.249 [2024-11-19 05:30:22.625988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:16.249 [2024-11-19 05:30:22.626002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:16.249 [2024-11-19 05:30:22.626010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0
00:28:16.249 [2024-11-19 05:30:22.626019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.249 [2024-11-19 05:30:22.626057] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:28:16.249 [2024-11-19 05:30:22.626069] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:28:16.249 [2024-11-19 05:30:22.626079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
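Note: the bdev_nvme_failover_trid lines show the bdev layer walking the controller through its alternate listeners (4420 -> 4421 -> 4422 on 192.168.100.8). A minimal sketch of the same idea against the public driver API, reusing the address and subsystem NQN from this log; the fail_over helper is hypothetical, and bdev_nvme performs the equivalent internally:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Retarget a controller at an alternate NVMe-oF listener, then reconnect. */
    static int fail_over(struct spdk_nvme_ctrlr *ctrlr, const char *trsvcid)
    {
            struct spdk_nvme_transport_id trid = {};

            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
            trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", trsvcid);
            snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

            if (spdk_nvme_ctrlr_set_trid(ctrlr, &trid) != 0) {
                    return -1;
            }
            /* The reset aborts queued I/O (the SQ DELETION completions above)
             * and reconnects using the new transport ID. */
            return spdk_nvme_ctrlr_reset(ctrlr);
    }

For instance, fail_over(ctrlr, "4421") after the 4420 path drops would correspond to the first failover logged here.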
00:28:16.249 [2024-11-19 05:30:22.627744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:16.249 [2024-11-19 05:30:22.642106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:16.249 [2024-11-19 05:30:22.677941] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:16.249 [2024-11-19 05:30:27.018594 - 05:30:27.021119] nvme_qpair.c: 243/474: *NOTICE*: [repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for queued READ and WRITE commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:1143000 sqhd:5310 p:0 m:0 dnr:0]
00:28:16.252 [2024-11-19 05:30:27.023041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:16.252 [2024-11-19 05:30:27.023053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:16.252 [2024-11-19 05:30:27.023061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88464 len:8 PRP1 0x0 PRP2 0x0
00:28:16.252 [2024-11-19 05:30:27.023071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.252 [2024-11-19 05:30:27.023110] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:28:16.252 [2024-11-19 05:30:27.023121] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:28:16.252 [2024-11-19 05:30:27.023131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:16.252 [2024-11-19 05:30:27.024931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:16.252 [2024-11-19 05:30:27.039052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:16.252 [2024-11-19 05:30:27.069969] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:16.252
00:28:16.252 Latency(us)
00:28:16.252 [2024-11-19T04:30:32.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.252 [2024-11-19T04:30:32.810Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:16.252 Verification LBA range: start 0x0 length 0x4000
00:28:16.252 NVMe0n1 : 15.00 20144.37 78.69 291.45 0.00 6251.33 491.52 1020054.73
00:28:16.252 [2024-11-19T04:30:32.810Z] ===================================================================================================================
00:28:16.252 [2024-11-19T04:30:32.810Z] Total : 20144.37 78.69 291.45 0.00 6251.33 491.52 1020054.73
00:28:16.252 Received shutdown signal, test time was about 15.000000 seconds
00:28:16.252
00:28:16.252 Latency(us)
00:28:16.252 [2024-11-19T04:30:32.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.252 [2024-11-19T04:30:32.810Z] ===================================================================================================================
00:28:16.252 [2024-11-19T04:30:32.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:16.252 05:30:32 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:16.252 05:30:32 -- host/failover.sh@65 -- # count=3
00:28:16.252 05:30:32 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:16.252 05:30:32 -- host/failover.sh@73 -- # bdevperf_pid=1956971
00:28:16.252 05:30:32 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:16.252 05:30:32 -- host/failover.sh@75 -- # waitforlisten 1956971 /var/tmp/bdevperf.sock
00:28:16.252 05:30:32 -- common/autotest_common.sh@829 -- # '[' -z 1956971 ']'
00:28:16.252 05:30:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:16.252 05:30:32 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:16.253 05:30:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
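The host/failover.sh@65-@67 trace above is the pass/fail check for this phase: it counts 'Resetting controller successful' notices in the captured bdevperf output and requires exactly three, one per forced failover (4420 to 4421, 4421 to 4422, 4422 to 4420). A minimal sketch of that check; the exact file the harness greps is not shown in the trace, so try.txt (the capture file used later in this run) stands in here for illustration:

    # Count completed resets in the captured bdevperf log from this job.
    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    # The test expects one successful reset per forced failover.
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi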
00:28:16.253 05:30:32 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:16.253 05:30:32 -- common/autotest_common.sh@10 -- # set +x
00:28:16.821 05:30:33 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:16.821 05:30:33 -- common/autotest_common.sh@862 -- # return 0
00:28:16.821 05:30:33 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:28:17.081 [2024-11-19 05:30:33.457074] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:28:17.081 05:30:33 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:28:17.081 [2024-11-19 05:30:33.641714] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:28:17.340 05:30:33 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:17.599 NVMe0n1
00:28:17.599 05:30:33 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:17.599
00:28:17.858 05:30:34 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:17.858
00:28:17.858 05:30:34 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:17.858 05:30:34 -- host/failover.sh@82 -- # grep -q NVMe0
00:28:18.117 05:30:34 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:18.377 05:30:34 -- host/failover.sh@87 -- # sleep 3
00:28:21.670 05:30:37 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:21.670 05:30:37 -- host/failover.sh@88 -- # grep -q NVMe0
00:28:21.670 05:30:37 -- host/failover.sh@90 -- # run_test_pid=1958015
00:28:21.670 05:30:37 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:21.670 05:30:37 -- host/failover.sh@92 -- # wait 1958015
00:28:22.608 0
00:28:22.609 05:30:39 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:22.609 [2024-11-19 05:30:32.474674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
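The @76-@84 sequence above sets up the multipath topology for this phase: two extra listeners are added on the target, bdev_nvme_attach_controller is called once per portal with the same bdev name so that 4421 and 4422 are registered as alternate trids for NVMe0, and the active 4420 path is then detached to force a failover. A condensed sketch of that sequence, using the same rpc.py script, RPC socket, and addresses that appear in the trace (the RPC and NQN variables are shorthand introduced here):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Expose two additional portals on the target side.
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4422
    # Attaching the same bdev name once per portal records 4421/4422 as failover trids.
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n $NQN
    done
    # Drop the active path; bdev_nvme fails over to the next registered trid.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN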
00:28:22.609 [2024-11-19 05:30:32.474733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956971 ]
00:28:22.609 EAL: No free 2048 kB hugepages reported on node 1
00:28:22.609 [2024-11-19 05:30:32.545463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:22.609 [2024-11-19 05:30:32.578591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:22.609 [2024-11-19 05:30:34.768946] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:28:22.609 [2024-11-19 05:30:34.769491] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:22.609 [2024-11-19 05:30:34.769517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:22.609 [2024-11-19 05:30:34.791334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:22.609 [2024-11-19 05:30:34.807625] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:22.609 Running I/O for 1 seconds...
00:28:22.609
00:28:22.609 Latency(us)
00:28:22.609 [2024-11-19T04:30:39.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.609 [2024-11-19T04:30:39.167Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:22.609 Verification LBA range: start 0x0 length 0x4000
00:28:22.609 NVMe0n1 : 1.00 25326.26 98.93 0.00 0.00 5030.59 1238.63 11324.62
00:28:22.609 [2024-11-19T04:30:39.167Z] ===================================================================================================================
00:28:22.609 [2024-11-19T04:30:39.167Z] Total : 25326.26 98.93 0.00 0.00 5030.59 1238.63 11324.62
00:28:22.609 05:30:39 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:22.866 05:30:39 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:22.866 05:30:39 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:23.125 05:30:39 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:23.125 05:30:39 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:23.385 05:30:39 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:23.385 05:30:39 -- host/failover.sh@101 -- # sleep 3
00:28:26.677 05:30:42 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.677 05:30:42 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:26.677 05:30:43 -- host/failover.sh@108 -- # killprocess 1956971
00:28:26.677 05:30:43 -- common/autotest_common.sh@936 -- # '[' -z 1956971 ']'
00:28:26.677 05:30:43 -- common/autotest_common.sh@940 -- # kill -0 1956971
00:28:26.677 05:30:43 -- common/autotest_common.sh@941 -- # uname
00:28:26.677 05:30:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:26.677 05:30:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1956971
00:28:26.677 05:30:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:26.677 05:30:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:26.677 05:30:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1956971'
killing process with pid 1956971
00:28:26.677 05:30:43 -- common/autotest_common.sh@955 -- # kill 1956971
00:28:26.677 05:30:43 -- common/autotest_common.sh@960 -- # wait 1956971
00:28:26.936 05:30:43 -- host/failover.sh@110 -- # sync
00:28:26.936 05:30:43 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:27.196 05:30:43 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:27.196 05:30:43 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:27.196 05:30:43 -- host/failover.sh@116 -- # nvmftestfini
00:28:27.196 05:30:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:27.196 05:30:43 -- nvmf/common.sh@116 -- # sync
00:28:27.196 05:30:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:27.196 05:30:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:27.196 05:30:43 -- nvmf/common.sh@119 -- # set +e
00:28:27.196 05:30:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:27.196 05:30:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:28:27.196 rmmod nvme_rdma
00:28:27.196 rmmod nvme_fabrics
00:28:27.196 05:30:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:27.196 05:30:43 -- nvmf/common.sh@123 -- # set -e
00:28:27.196 05:30:43 -- nvmf/common.sh@124 -- # return 0
00:28:27.196 05:30:43 -- nvmf/common.sh@477 -- # '[' -n 1953693 ']'
00:28:27.196 05:30:43 -- nvmf/common.sh@478 -- # killprocess 1953693
00:28:27.196 05:30:43 -- common/autotest_common.sh@936 -- # '[' -z 1953693 ']'
00:28:27.196 05:30:43 -- common/autotest_common.sh@940 -- # kill -0 1953693
00:28:27.196 05:30:43 -- common/autotest_common.sh@941 -- # uname
00:28:27.196 05:30:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:27.196 05:30:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1953693
00:28:27.196 05:30:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:27.196 05:30:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:27.196 05:30:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1953693'
killing process with pid 1953693
00:28:27.196 05:30:43 -- common/autotest_common.sh@955 -- # kill 1953693
00:28:27.197 05:30:43 -- common/autotest_common.sh@960 -- # wait 1953693
00:28:27.457 05:30:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:27.457 05:30:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:27.457
00:28:27.457 real 0m37.453s
00:28:27.457 user 2m4.862s
00:28:27.457 sys 0m7.288s
00:28:27.457 05:30:43 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:27.457 05:30:43 -- common/autotest_common.sh@10 -- # set +x
00:28:27.457 ************************************
00:28:27.457 END TEST nvmf_failover
00:28:27.457 ************************************
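The teardown traced above follows the harness's usual shape: killprocess stops the bdevperf process (pid 1956971) and the nvmf target (pid 1953693), nvmf_delete_subsystem removes cnode1, and nvmftestfini unloads the nvme-rdma and nvme-fabrics modules. A simplified sketch of the killprocess helper as traced at common/autotest_common.sh@936-@960; the real helper also inspects the process name with ps and special-cases processes running under sudo, so treat this as an approximation:

    # Simplified sketch of the traced killprocess helper.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is our child
    }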
common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.457 05:30:43 -- common/autotest_common.sh@10 -- # set +x 00:28:27.457 ************************************ 00:28:27.457 START TEST nvmf_discovery 00:28:27.457 ************************************ 00:28:27.457 05:30:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:27.717 * Looking for test storage... 00:28:27.717 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:27.717 05:30:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:27.717 05:30:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:27.717 05:30:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:27.717 05:30:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:27.717 05:30:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:27.717 05:30:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:27.717 05:30:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:27.717 05:30:44 -- scripts/common.sh@335 -- # IFS=.-: 00:28:27.717 05:30:44 -- scripts/common.sh@335 -- # read -ra ver1 00:28:27.717 05:30:44 -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.717 05:30:44 -- scripts/common.sh@336 -- # read -ra ver2 00:28:27.717 05:30:44 -- scripts/common.sh@337 -- # local 'op=<' 00:28:27.717 05:30:44 -- scripts/common.sh@339 -- # ver1_l=2 00:28:27.717 05:30:44 -- scripts/common.sh@340 -- # ver2_l=1 00:28:27.717 05:30:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:27.717 05:30:44 -- scripts/common.sh@343 -- # case "$op" in 00:28:27.717 05:30:44 -- scripts/common.sh@344 -- # : 1 00:28:27.717 05:30:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:27.717 05:30:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.717 05:30:44 -- scripts/common.sh@364 -- # decimal 1 00:28:27.717 05:30:44 -- scripts/common.sh@352 -- # local d=1 00:28:27.717 05:30:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.717 05:30:44 -- scripts/common.sh@354 -- # echo 1 00:28:27.717 05:30:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:27.717 05:30:44 -- scripts/common.sh@365 -- # decimal 2 00:28:27.717 05:30:44 -- scripts/common.sh@352 -- # local d=2 00:28:27.717 05:30:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.717 05:30:44 -- scripts/common.sh@354 -- # echo 2 00:28:27.717 05:30:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:27.717 05:30:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:27.717 05:30:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:27.717 05:30:44 -- scripts/common.sh@367 -- # return 0 00:28:27.717 05:30:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.717 05:30:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:27.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.717 --rc genhtml_branch_coverage=1 00:28:27.717 --rc genhtml_function_coverage=1 00:28:27.717 --rc genhtml_legend=1 00:28:27.717 --rc geninfo_all_blocks=1 00:28:27.717 --rc geninfo_unexecuted_blocks=1 00:28:27.717 00:28:27.717 ' 00:28:27.717 05:30:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:27.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.717 --rc genhtml_branch_coverage=1 00:28:27.717 --rc genhtml_function_coverage=1 00:28:27.717 --rc genhtml_legend=1 00:28:27.717 --rc geninfo_all_blocks=1 00:28:27.717 --rc geninfo_unexecuted_blocks=1 00:28:27.717 00:28:27.717 ' 00:28:27.717 05:30:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:27.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.717 --rc genhtml_branch_coverage=1 00:28:27.717 --rc genhtml_function_coverage=1 00:28:27.717 --rc genhtml_legend=1 00:28:27.717 --rc geninfo_all_blocks=1 00:28:27.717 --rc geninfo_unexecuted_blocks=1 00:28:27.717 00:28:27.717 ' 00:28:27.717 05:30:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:27.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.717 --rc genhtml_branch_coverage=1 00:28:27.717 --rc genhtml_function_coverage=1 00:28:27.717 --rc genhtml_legend=1 00:28:27.717 --rc geninfo_all_blocks=1 00:28:27.717 --rc geninfo_unexecuted_blocks=1 00:28:27.717 00:28:27.717 ' 00:28:27.717 05:30:44 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.717 05:30:44 -- nvmf/common.sh@7 -- # uname -s 00:28:27.717 05:30:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.717 05:30:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.717 05:30:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.717 05:30:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.717 05:30:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.717 05:30:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.717 05:30:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.717 05:30:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.717 05:30:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.717 05:30:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.717 05:30:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:27.717 05:30:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:27.717 05:30:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.717 05:30:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.717 05:30:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.717 05:30:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:27.717 05:30:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.718 05:30:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.718 05:30:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.718 05:30:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.718 05:30:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.718 05:30:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.718 05:30:44 -- paths/export.sh@5 -- # export PATH 00:28:27.718 05:30:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.718 05:30:44 -- nvmf/common.sh@46 -- # : 0 00:28:27.718 05:30:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.718 05:30:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.718 05:30:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.718 05:30:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.718 05:30:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.718 05:30:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.718 05:30:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.718 05:30:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.718 05:30:44 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:27.718 05:30:44 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:27.718 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:27.718 05:30:44 -- host/discovery.sh@13 -- # exit 0 00:28:27.718 00:28:27.718 real 0m0.220s 00:28:27.718 user 0m0.115s 00:28:27.718 sys 0m0.123s 00:28:27.718 05:30:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:27.718 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:28:27.718 ************************************ 00:28:27.718 END TEST nvmf_discovery 00:28:27.718 ************************************ 00:28:27.718 05:30:44 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:27.718 05:30:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:27.718 05:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.718 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:28:27.718 ************************************ 00:28:27.718 START TEST nvmf_discovery_remove_ifc 00:28:27.718 ************************************ 00:28:27.718 05:30:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:27.978 * Looking for test storage... 00:28:27.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:27.978 05:30:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:27.978 05:30:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:27.978 05:30:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:27.978 05:30:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:27.978 05:30:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:27.978 05:30:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:27.978 05:30:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:27.978 05:30:44 -- scripts/common.sh@335 -- # IFS=.-: 00:28:27.978 05:30:44 -- scripts/common.sh@335 -- # read -ra ver1 00:28:27.978 05:30:44 -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.978 05:30:44 -- scripts/common.sh@336 -- # read -ra ver2 00:28:27.978 05:30:44 -- scripts/common.sh@337 -- # local 'op=<' 00:28:27.978 05:30:44 -- scripts/common.sh@339 -- # ver1_l=2 00:28:27.978 05:30:44 -- scripts/common.sh@340 -- # ver2_l=1 00:28:27.978 05:30:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:27.978 05:30:44 -- scripts/common.sh@343 -- # case "$op" in 00:28:27.978 05:30:44 -- scripts/common.sh@344 -- # : 1 00:28:27.978 05:30:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:27.978 05:30:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.978 05:30:44 -- scripts/common.sh@364 -- # decimal 1 00:28:27.978 05:30:44 -- scripts/common.sh@352 -- # local d=1 00:28:27.978 05:30:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.978 05:30:44 -- scripts/common.sh@354 -- # echo 1 00:28:27.978 05:30:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:27.978 05:30:44 -- scripts/common.sh@365 -- # decimal 2 00:28:27.978 05:30:44 -- scripts/common.sh@352 -- # local d=2 00:28:27.978 05:30:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.978 05:30:44 -- scripts/common.sh@354 -- # echo 2 00:28:27.978 05:30:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:27.978 05:30:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:27.978 05:30:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:27.978 05:30:44 -- scripts/common.sh@367 -- # return 0 00:28:27.978 05:30:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.978 05:30:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.978 --rc genhtml_branch_coverage=1 00:28:27.978 --rc genhtml_function_coverage=1 00:28:27.978 --rc genhtml_legend=1 00:28:27.978 --rc geninfo_all_blocks=1 00:28:27.978 --rc geninfo_unexecuted_blocks=1 00:28:27.978 00:28:27.978 ' 00:28:27.978 05:30:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.978 --rc genhtml_branch_coverage=1 00:28:27.978 --rc genhtml_function_coverage=1 00:28:27.978 --rc genhtml_legend=1 00:28:27.978 --rc geninfo_all_blocks=1 00:28:27.978 --rc geninfo_unexecuted_blocks=1 00:28:27.978 00:28:27.978 ' 00:28:27.978 05:30:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.978 --rc genhtml_branch_coverage=1 00:28:27.978 --rc genhtml_function_coverage=1 00:28:27.978 --rc genhtml_legend=1 00:28:27.978 --rc geninfo_all_blocks=1 00:28:27.978 --rc geninfo_unexecuted_blocks=1 00:28:27.978 00:28:27.978 ' 00:28:27.978 05:30:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.978 --rc genhtml_branch_coverage=1 00:28:27.978 --rc genhtml_function_coverage=1 00:28:27.978 --rc genhtml_legend=1 00:28:27.978 --rc geninfo_all_blocks=1 00:28:27.978 --rc geninfo_unexecuted_blocks=1 00:28:27.978 00:28:27.978 ' 00:28:27.978 05:30:44 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.978 05:30:44 -- nvmf/common.sh@7 -- # uname -s 00:28:27.978 05:30:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.978 05:30:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.978 05:30:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.978 05:30:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.978 05:30:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.978 05:30:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.978 05:30:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.978 05:30:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.978 05:30:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.979 05:30:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.979 05:30:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:27.979 05:30:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:27.979 05:30:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.979 05:30:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.979 05:30:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.979 05:30:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:27.979 05:30:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.979 05:30:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.979 05:30:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.979 05:30:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.979 05:30:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.979 05:30:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.979 05:30:44 -- paths/export.sh@5 -- # export PATH 00:28:27.979 05:30:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.979 05:30:44 -- nvmf/common.sh@46 -- # : 0 00:28:27.979 05:30:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.979 05:30:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.979 05:30:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.979 05:30:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.979 05:30:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.979 05:30:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.979 05:30:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.979 05:30:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.979 05:30:44 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:27.979 05:30:44 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:27.979 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:27.979 05:30:44 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:27.979 00:28:27.979 real 0m0.212s 00:28:27.979 user 0m0.123s 00:28:27.979 sys 0m0.107s 00:28:27.979 05:30:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:27.979 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 ************************************ 00:28:27.979 END TEST nvmf_discovery_remove_ifc 00:28:27.979 ************************************ 00:28:27.979 05:30:44 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:27.979 05:30:44 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:27.979 05:30:44 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:27.979 05:30:44 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:27.979 05:30:44 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:27.979 05:30:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:27.979 05:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.979 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 ************************************ 00:28:27.979 START TEST nvmf_bdevperf 00:28:27.979 ************************************ 00:28:27.979 05:30:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:28.239 * Looking for test storage... 00:28:28.239 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:28.239 05:30:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:28.239 05:30:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:28.239 05:30:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:28.239 05:30:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:28.239 05:30:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:28.239 05:30:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:28.239 05:30:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:28.239 05:30:44 -- scripts/common.sh@335 -- # IFS=.-: 00:28:28.239 05:30:44 -- scripts/common.sh@335 -- # read -ra ver1 00:28:28.239 05:30:44 -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.239 05:30:44 -- scripts/common.sh@336 -- # read -ra ver2 00:28:28.239 05:30:44 -- scripts/common.sh@337 -- # local 'op=<' 00:28:28.239 05:30:44 -- scripts/common.sh@339 -- # ver1_l=2 00:28:28.239 05:30:44 -- scripts/common.sh@340 -- # ver2_l=1 00:28:28.239 05:30:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:28.239 05:30:44 -- scripts/common.sh@343 -- # case "$op" in 00:28:28.239 05:30:44 -- scripts/common.sh@344 -- # : 1 00:28:28.239 05:30:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:28.239 05:30:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.239 05:30:44 -- scripts/common.sh@364 -- # decimal 1 00:28:28.239 05:30:44 -- scripts/common.sh@352 -- # local d=1 00:28:28.239 05:30:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.239 05:30:44 -- scripts/common.sh@354 -- # echo 1 00:28:28.239 05:30:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:28.239 05:30:44 -- scripts/common.sh@365 -- # decimal 2 00:28:28.239 05:30:44 -- scripts/common.sh@352 -- # local d=2 00:28:28.239 05:30:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.239 05:30:44 -- scripts/common.sh@354 -- # echo 2 00:28:28.239 05:30:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:28.239 05:30:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:28.239 05:30:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:28.239 05:30:44 -- scripts/common.sh@367 -- # return 0 00:28:28.239 05:30:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.239 05:30:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:28.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.239 --rc genhtml_branch_coverage=1 00:28:28.239 --rc genhtml_function_coverage=1 00:28:28.239 --rc genhtml_legend=1 00:28:28.239 --rc geninfo_all_blocks=1 00:28:28.239 --rc geninfo_unexecuted_blocks=1 00:28:28.239 00:28:28.239 ' 00:28:28.239 05:30:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:28.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.239 --rc genhtml_branch_coverage=1 00:28:28.239 --rc genhtml_function_coverage=1 00:28:28.239 --rc genhtml_legend=1 00:28:28.239 --rc geninfo_all_blocks=1 00:28:28.239 --rc geninfo_unexecuted_blocks=1 00:28:28.239 00:28:28.239 ' 00:28:28.239 05:30:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:28.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.239 --rc genhtml_branch_coverage=1 00:28:28.239 --rc genhtml_function_coverage=1 00:28:28.239 --rc genhtml_legend=1 00:28:28.239 --rc geninfo_all_blocks=1 00:28:28.239 --rc geninfo_unexecuted_blocks=1 00:28:28.239 00:28:28.239 ' 00:28:28.239 05:30:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:28.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.239 --rc genhtml_branch_coverage=1 00:28:28.239 --rc genhtml_function_coverage=1 00:28:28.239 --rc genhtml_legend=1 00:28:28.239 --rc geninfo_all_blocks=1 00:28:28.239 --rc geninfo_unexecuted_blocks=1 00:28:28.239 00:28:28.239 ' 00:28:28.239 05:30:44 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.239 05:30:44 -- nvmf/common.sh@7 -- # uname -s 00:28:28.239 05:30:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.239 05:30:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.239 05:30:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.239 05:30:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.239 05:30:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.239 05:30:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.239 05:30:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.239 05:30:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.239 05:30:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.239 05:30:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.239 05:30:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:28.239 05:30:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:28.239 05:30:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.239 05:30:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.239 05:30:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.239 05:30:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:28.239 05:30:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.239 05:30:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.239 05:30:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.239 05:30:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.239 05:30:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.239 05:30:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.239 05:30:44 -- paths/export.sh@5 -- # export PATH 00:28:28.239 05:30:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.239 05:30:44 -- nvmf/common.sh@46 -- # : 0 00:28:28.239 05:30:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:28.239 05:30:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:28.239 05:30:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:28.239 05:30:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.239 05:30:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.239 05:30:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:28.239 05:30:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:28.239 05:30:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:28.239 05:30:44 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.239 05:30:44 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.239 05:30:44 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:28.239 05:30:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:28.239 05:30:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.239 05:30:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:28.239 05:30:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:28.239 05:30:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:28.239 05:30:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.239 05:30:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.239 05:30:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.239 05:30:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:28.239 05:30:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:28.239 05:30:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:28.239 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:28:34.811 05:30:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:34.811 05:30:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:34.811 05:30:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:34.811 05:30:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:34.811 05:30:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:34.811 05:30:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:34.811 05:30:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:34.811 05:30:51 -- nvmf/common.sh@294 -- # net_devs=() 00:28:34.811 05:30:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:34.811 05:30:51 -- nvmf/common.sh@295 -- # e810=() 00:28:34.811 05:30:51 -- nvmf/common.sh@295 -- # local -ga e810 00:28:34.811 05:30:51 -- nvmf/common.sh@296 -- # x722=() 00:28:34.811 05:30:51 -- nvmf/common.sh@296 -- # local -ga x722 00:28:34.811 05:30:51 -- nvmf/common.sh@297 -- # mlx=() 00:28:34.811 05:30:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:34.811 05:30:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.811 05:30:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:34.811 05:30:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:34.811 
05:30:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:34.811 05:30:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:34.811 05:30:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:34.811 05:30:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:34.811 05:30:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:34.811 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:34.811 05:30:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:34.811 05:30:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:34.811 05:30:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:34.811 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:34.811 05:30:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:34.811 05:30:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:34.811 05:30:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:34.811 05:30:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:34.811 05:30:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.811 05:30:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:34.811 05:30:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.811 05:30:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:34.811 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:34.811 05:30:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.811 05:30:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:34.811 05:30:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.811 05:30:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:34.811 05:30:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.811 05:30:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:34.811 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:34.811 05:30:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.812 05:30:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:34.812 05:30:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:34.812 05:30:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:34.812 05:30:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:34.812 05:30:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:34.812 05:30:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:34.812 05:30:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:34.812 05:30:51 -- nvmf/common.sh@57 -- # uname 00:28:34.812 05:30:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:34.812 05:30:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:34.812 
05:30:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:34.812 05:30:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:34.812 05:30:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:34.812 05:30:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:34.812 05:30:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:34.812 05:30:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:34.812 05:30:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:34.812 05:30:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:34.812 05:30:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:34.812 05:30:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:34.812 05:30:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:34.812 05:30:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:35.072 05:30:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:35.072 05:30:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:35.072 05:30:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@104 -- # continue 2 00:28:35.072 05:30:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@104 -- # continue 2 00:28:35.072 05:30:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:35.072 05:30:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:35.072 05:30:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:35.072 05:30:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:35.072 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:35.072 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:35.072 altname enp217s0f0np0 00:28:35.072 altname ens818f0np0 00:28:35.072 inet 192.168.100.8/24 scope global mlx_0_0 00:28:35.072 valid_lft forever preferred_lft forever 00:28:35.072 05:30:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:35.072 05:30:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:35.072 05:30:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:35.072 05:30:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:35.072 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:35.072 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:35.072 altname enp217s0f1np1 00:28:35.072 altname ens818f1np1 00:28:35.072 inet 192.168.100.9/24 scope global mlx_0_1 00:28:35.072 valid_lft forever preferred_lft forever 00:28:35.072 05:30:51 -- nvmf/common.sh@410 -- # return 0 00:28:35.072 05:30:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:35.072 05:30:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:35.072 05:30:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:35.072 05:30:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:35.072 05:30:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:35.072 05:30:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:35.072 05:30:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:35.072 05:30:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:35.072 05:30:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:35.072 05:30:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@104 -- # continue 2 00:28:35.072 05:30:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:35.072 05:30:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:35.072 05:30:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@104 -- # continue 2 00:28:35.072 05:30:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:35.072 05:30:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:35.072 05:30:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:35.072 05:30:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:35.072 05:30:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:35.072 05:30:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:35.072 192.168.100.9' 00:28:35.072 05:30:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:35.072 192.168.100.9' 00:28:35.072 05:30:51 -- nvmf/common.sh@445 -- # head -n 1 00:28:35.072 05:30:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:35.072 05:30:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:35.072 192.168.100.9' 00:28:35.072 05:30:51 -- nvmf/common.sh@446 -- # tail -n +2 00:28:35.072 05:30:51 -- nvmf/common.sh@446 -- # head -n 1 00:28:35.072 05:30:51 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:35.072 05:30:51 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:35.072 05:30:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:35.072 05:30:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:35.072 05:30:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:35.072 05:30:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:35.072 05:30:51 -- host/bdevperf.sh@25 -- # tgt_init 00:28:35.072 05:30:51 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:35.072 05:30:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.072 05:30:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:35.072 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:28:35.072 05:30:51 -- nvmf/common.sh@469 -- # nvmfpid=1962431 00:28:35.073 05:30:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:35.073 05:30:51 -- nvmf/common.sh@470 -- # waitforlisten 1962431 00:28:35.073 05:30:51 -- common/autotest_common.sh@829 -- # '[' -z 1962431 ']' 00:28:35.073 05:30:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.073 05:30:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:35.073 05:30:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.073 05:30:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.073 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:28:35.073 [2024-11-19 05:30:51.600241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:35.073 [2024-11-19 05:30:51.600291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.073 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.332 [2024-11-19 05:30:51.670569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:35.332 [2024-11-19 05:30:51.707873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.332 [2024-11-19 05:30:51.707988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.332 [2024-11-19 05:30:51.707999] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.332 [2024-11-19 05:30:51.708007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
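The nvmf target above is started with reactor mask 0xE (nvmfappstart -m 0xE), and the reactor notices just below report cores 1, 2 and 3, which are exactly the set bits of that mask. A minimal bash sketch of how such a mask maps to reactor cores (an illustration, not part of the harness; the 32-core bound is an arbitrary cap for brevity):

    mask=0xE                        # reactor mask passed via -m above
    for core in $(seq 0 31); do     # assume at most 32 cores
        if (( (mask >> core) & 1 )); then
            echo "reactor on core $core"   # prints cores 1, 2 and 3 for 0xE
        fi
    done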
00:28:35.332 [2024-11-19 05:30:51.708051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.332 [2024-11-19 05:30:51.708157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.332 [2024-11-19 05:30:51.708159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.901 05:30:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.901 05:30:52 -- common/autotest_common.sh@862 -- # return 0 00:28:35.901 05:30:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:35.901 05:30:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.901 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 05:30:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.161 05:30:52 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:36.161 05:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.161 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 [2024-11-19 05:30:52.496261] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xada9c0/0xadeeb0) succeed. 00:28:36.161 [2024-11-19 05:30:52.505355] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xadbf10/0xb20550) succeed. 00:28:36.161 05:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.161 05:30:52 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:36.161 05:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.161 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 Malloc0 00:28:36.161 05:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.161 05:30:52 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.161 05:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.161 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 05:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.161 05:30:52 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.161 05:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.161 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 05:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.161 05:30:52 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:36.161 05:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.161 05:30:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.161 [2024-11-19 05:30:52.652438] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:36.161 05:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.161 05:30:52 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:36.161 05:30:52 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:36.161 05:30:52 -- nvmf/common.sh@520 -- # config=() 00:28:36.161 05:30:52 -- nvmf/common.sh@520 -- # local subsystem config 00:28:36.161 05:30:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:36.161 05:30:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:36.161 { 00:28:36.161 "params": { 00:28:36.161 "name": "Nvme$subsystem", 00:28:36.161 "trtype": 
"$TEST_TRANSPORT", 00:28:36.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.161 "adrfam": "ipv4", 00:28:36.161 "trsvcid": "$NVMF_PORT", 00:28:36.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.161 "hdgst": ${hdgst:-false}, 00:28:36.161 "ddgst": ${ddgst:-false} 00:28:36.161 }, 00:28:36.161 "method": "bdev_nvme_attach_controller" 00:28:36.161 } 00:28:36.161 EOF 00:28:36.161 )") 00:28:36.161 05:30:52 -- nvmf/common.sh@542 -- # cat 00:28:36.161 05:30:52 -- nvmf/common.sh@544 -- # jq . 00:28:36.161 05:30:52 -- nvmf/common.sh@545 -- # IFS=, 00:28:36.161 05:30:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:36.161 "params": { 00:28:36.161 "name": "Nvme1", 00:28:36.161 "trtype": "rdma", 00:28:36.161 "traddr": "192.168.100.8", 00:28:36.161 "adrfam": "ipv4", 00:28:36.161 "trsvcid": "4420", 00:28:36.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.161 "hdgst": false, 00:28:36.161 "ddgst": false 00:28:36.161 }, 00:28:36.161 "method": "bdev_nvme_attach_controller" 00:28:36.161 }' 00:28:36.161 [2024-11-19 05:30:52.703816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:36.161 [2024-11-19 05:30:52.703868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962715 ] 00:28:36.421 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.421 [2024-11-19 05:30:52.775934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.421 [2024-11-19 05:30:52.812920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.421 Running I/O for 1 seconds... 
00:28:37.802 00:28:37.802 Latency(us) 00:28:37.802 [2024-11-19T04:30:54.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.802 [2024-11-19T04:30:54.360Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.802 Verification LBA range: start 0x0 length 0x4000 00:28:37.802 Nvme1n1 : 1.00 25703.28 100.40 0.00 0.00 4956.27 1120.67 11744.05 00:28:37.802 [2024-11-19T04:30:54.360Z] =================================================================================================================== 00:28:37.802 [2024-11-19T04:30:54.360Z] Total : 25703.28 100.40 0.00 0.00 4956.27 1120.67 11744.05 00:28:37.802 05:30:54 -- host/bdevperf.sh@30 -- # bdevperfpid=1962985 00:28:37.802 05:30:54 -- host/bdevperf.sh@32 -- # sleep 3 00:28:37.802 05:30:54 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:37.802 05:30:54 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:37.802 05:30:54 -- nvmf/common.sh@520 -- # config=() 00:28:37.802 05:30:54 -- nvmf/common.sh@520 -- # local subsystem config 00:28:37.802 05:30:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.802 05:30:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.802 { 00:28:37.802 "params": { 00:28:37.802 "name": "Nvme$subsystem", 00:28:37.802 "trtype": "$TEST_TRANSPORT", 00:28:37.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.802 "adrfam": "ipv4", 00:28:37.802 "trsvcid": "$NVMF_PORT", 00:28:37.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.802 "hdgst": ${hdgst:-false}, 00:28:37.802 "ddgst": ${ddgst:-false} 00:28:37.802 }, 00:28:37.802 "method": "bdev_nvme_attach_controller" 00:28:37.802 } 00:28:37.802 EOF 00:28:37.802 )") 00:28:37.802 05:30:54 -- nvmf/common.sh@542 -- # cat 00:28:37.802 05:30:54 -- nvmf/common.sh@544 -- # jq . 00:28:37.802 05:30:54 -- nvmf/common.sh@545 -- # IFS=, 00:28:37.802 05:30:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:37.802 "params": { 00:28:37.802 "name": "Nvme1", 00:28:37.802 "trtype": "rdma", 00:28:37.802 "traddr": "192.168.100.8", 00:28:37.802 "adrfam": "ipv4", 00:28:37.802 "trsvcid": "4420", 00:28:37.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.802 "hdgst": false, 00:28:37.802 "ddgst": false 00:28:37.802 }, 00:28:37.802 "method": "bdev_nvme_attach_controller" 00:28:37.802 }' 00:28:37.802 [2024-11-19 05:30:54.225541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:37.802 [2024-11-19 05:30:54.225597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962985 ] 00:28:37.802 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.802 [2024-11-19 05:30:54.295854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.802 [2024-11-19 05:30:54.329001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.061 Running I/O for 15 seconds... 
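Both bdevperf runs above are fed the same generated target config, over /dev/fd/62 and /dev/fd/63 respectively: gen_nvmf_target_json expands the heredoc and printf lines traced above into a single attach-controller entry. Pretty-printed for readability (values exactly as printed in the trace; any enclosing wrapper emitted by jq is not visible in this capture and is omitted):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }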
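The 15-second run just launched is then deliberately disrupted: host/bdevperf.sh SIGKILLs the nvmf target (pid 1962431, started earlier) while verify I/O is in flight, which is why the qpair completions below drain with ABORTED - SQ DELETION status. A condensed sketch of the sequence the trace records (pid, queue depth and flags as captured in this run; the bdevperf path is shortened):

    build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
    sleep 3              # let verify I/O ramp up
    kill -9 1962431      # SIGKILL nvmf_tgt mid-run
    sleep 3              # in-flight I/O fails back as ABORTED - SQ DELETION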
00:28:41.351 05:30:57 -- host/bdevperf.sh@33 -- # kill -9 1962431
00:28:41.351 05:30:57 -- host/bdevperf.sh@35 -- # sleep 3
00:28:41.922 [2024-11-19 05:30:58.218191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182e00
00:28:41.922 [2024-11-19 05:30:58.218237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4419c000 sqhd:5310 p:0 m:0 dnr:0
00:28:41.922 [2024-11-19 05:30:58.218258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184200
00:28:41.922 [2024-11-19 05:30:58.218267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4419c000 sqhd:5310 p:0 m:0 dnr:0
00:28:41.922 [2024-11-19 05:30:58.218318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:41.922 [2024-11-19 05:30:58.218327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4419c000 sqhd:5310 p:0 m:0 dnr:0
[... remaining in-flight READ/WRITE command/completion pairs (lba 30400-31744) elided; every one completed identically as ABORTED - SQ DELETION (00/08) qid:1 sqhd:5310 ...]
00:28:41.925 [2024-11-19 05:30:58.222576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:41.925 [2024-11-19 05:30:58.222590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:41.925 [2024-11-19 05:30:58.222598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31752 len:8 PRP1 0x0 PRP2 0x0
00:28:41.925 [2024-11-19 05:30:58.222608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.925 [2024-11-19 05:30:58.222650] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
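Every one of those ABORTED - SQ DELETION completions is the expected fallout of the step that opened this block: bdevperf.sh line 33 kill -9s the first nvmf target (pid 1962431) while the verify job is mid-flight, the RDMA qpair drops, and bdev_nvme fails all outstanding requests before freeing the qpair and scheduling a controller reset. A hedged sketch of the same fault injection from a shell; the pgrep pattern is illustrative, since the harness tracks the pid explicitly:

  # Yank the target away from a live initiator, then give bdev_nvme time to react.
  tgt_pid=$(pgrep -f nvmf_tgt | head -n 1)
  sudo kill -9 "$tgt_pid"
  sleep 3   # same grace period bdevperf.sh uses before restarting the target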
00:28:41.925 [2024-11-19 05:30:58.224369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.925 [2024-11-19 05:30:58.238763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:41.925 [2024-11-19 05:30:58.241059] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:41.925 [2024-11-19 05:30:58.241078] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:41.925 [2024-11-19 05:30:58.241092] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:42.872 [2024-11-19 05:30:59.245205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:42.872 [2024-11-19 05:30:59.245275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.872 [2024-11-19 05:30:59.245724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.872 [2024-11-19 05:30:59.245740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.872 [2024-11-19 05:30:59.245753] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:42.872 [2024-11-19 05:30:59.246056] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:42.872 [2024-11-19 05:30:59.248066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
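While the target stays dead, each reset attempt follows the same loop: the host expects RDMA_CM_EVENT_ESTABLISHED, receives RDMA_CM_EVENT_REJECTED from the CM event channel instead, reports connect error -74, and bdev_nvme queues the next retry. One way to watch from outside for the listener coming back, assuming nvme-cli is installed and the target's discovery service answers on the same 192.168.100.8:4420 endpoint (both assumptions, neither shown in this log):

  # Poll until an RDMA discovery connect stops being rejected.
  until nvme discover -t rdma -a 192.168.100.8 -s 4420 > /dev/null 2>&1; do
      sleep 1
  done
  echo 'listener is back; host reconnects should start succeeding'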
00:28:42.872 [2024-11-19 05:30:59.257847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.872 [2024-11-19 05:30:59.260220] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:42.872 [2024-11-19 05:30:59.260240] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:42.872 [2024-11-19 05:30:59.260248] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:43.810 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1962431 Killed "${NVMF_APP[@]}" "$@" 00:28:43.810 05:31:00 -- host/bdevperf.sh@36 -- # tgt_init 00:28:43.810 05:31:00 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:43.810 05:31:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:43.810 05:31:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:43.810 05:31:00 -- common/autotest_common.sh@10 -- # set +x 00:28:43.810 05:31:00 -- nvmf/common.sh@469 -- # nvmfpid=1963941 00:28:43.810 05:31:00 -- nvmf/common.sh@470 -- # waitforlisten 1963941 00:28:43.810 05:31:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.810 05:31:00 -- common/autotest_common.sh@829 -- # '[' -z 1963941 ']' 00:28:43.810 05:31:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.810 05:31:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.810 05:31:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.810 05:31:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.810 05:31:00 -- common/autotest_common.sh@10 -- # set +x 00:28:43.810 [2024-11-19 05:31:00.247497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:43.810 [2024-11-19 05:31:00.247557] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.810 [2024-11-19 05:31:00.264123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:43.810 [2024-11-19 05:31:00.264153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.810 [2024-11-19 05:31:00.264257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.810 [2024-11-19 05:31:00.264268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.810 [2024-11-19 05:31:00.264278] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:43.810 [2024-11-19 05:31:00.265537] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:43.810 [2024-11-19 05:31:00.265897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
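tgt_init brings up a replacement target from scratch: nvmfappstart launches nvmf_tgt with every tracepoint group enabled (-e 0xFFFF), reactors on cores 1-3 (-m 0xE) and shared-memory instance 0 (-i 0), then blocks until the app answers on /var/tmp/spdk.sock. A minimal equivalent of that launch-and-wait, with binary and socket paths taken from this log and an rpc_get_methods poll standing in for waitforlisten:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # Poll the RPC socket until the target is ready to take configuration.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.2
  done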
00:28:43.810 [2024-11-19 05:31:00.277307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.810 [2024-11-19 05:31:00.279471] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:43.810 [2024-11-19 05:31:00.279492] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:43.810 [2024-11-19 05:31:00.279500] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:43.810 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.810 [2024-11-19 05:31:00.318979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.810 [2024-11-19 05:31:00.356473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:43.810 [2024-11-19 05:31:00.356588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.810 [2024-11-19 05:31:00.356598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.810 [2024-11-19 05:31:00.356611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.810 [2024-11-19 05:31:00.356663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.810 [2024-11-19 05:31:00.356753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.810 [2024-11-19 05:31:00.356755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.748 05:31:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.748 05:31:01 -- common/autotest_common.sh@862 -- # return 0 00:28:44.748 05:31:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:44.748 05:31:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:44.748 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 05:31:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.748 05:31:01 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:44.748 05:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.748 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 [2024-11-19 05:31:01.150213] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bd89c0/0x1bdceb0) succeed. 00:28:44.748 [2024-11-19 05:31:01.159434] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bd9f10/0x1c1e550) succeed. 00:28:44.748 05:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.748 05:31:01 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:44.748 05:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.748 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 Malloc0 00:28:44.748 05:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.748 [2024-11-19 05:31:01.283957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:44.748 [2024-11-19 05:31:01.283994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
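The app_setup_trace notices spell out how to use the tracepoint mask that -e 0xFFFF just enabled (the over-long RDMA_REQ_RDY_TO_COMPL_PEND name error is cosmetic): attach spdk_trace to shared-memory instance 0 live, or copy the shm file for offline decoding. Both commands come straight from the notices above:

  # Live snapshot of the nvmf app's tracepoints (instance 0):
  spdk_trace -s nvmf -i 0
  # Or stash the raw trace buffer for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0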
00:28:44.748 [2024-11-19 05:31:01.284110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.748 [2024-11-19 05:31:01.284122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.748 [2024-11-19 05:31:01.284132] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:44.748 05:31:01 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.748 05:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.748 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 [2024-11-19 05:31:01.285113] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:44.748 [2024-11-19 05:31:01.285717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.748 05:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.748 05:31:01 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:44.748 05:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.748 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 [2024-11-19 05:31:01.296872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.749 [2024-11-19 05:31:01.299171] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:44.749 [2024-11-19 05:31:01.299192] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:44.749 [2024-11-19 05:31:01.299200] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:44.749 05:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.749 05:31:01 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:44.749 05:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.749 05:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.749 [2024-11-19 05:31:01.307041] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:45.008 05:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.008 05:31:01 -- host/bdevperf.sh@38 -- # wait 1962985 00:28:45.992 [2024-11-19 05:31:02.303050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:45.992 [2024-11-19 05:31:02.303075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.992 [2024-11-19 05:31:02.303191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.992 [2024-11-19 05:31:02.303202] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.992 [2024-11-19 05:31:02.303212] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:45.992 [2024-11-19 05:31:02.304476] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
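With the new target listening on the RPC socket, the script rebuilds exactly what the killed instance was serving, and the mlx5_0/mlx5_1 notices confirm the RDMA transport found both IB devices. The same reconstruction by hand, with arguments copied from the rpc_cmd traces above (rpc_cmd is the harness wrapper around scripts/rpc.py):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Once the listener is up, the waiting bdevperf (pid 1962985) reconnects and finishes its 15 s run.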
00:28:45.992 [2024-11-19 05:31:02.304951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.992 [2024-11-19 05:31:02.316601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.992 [2024-11-19 05:31:02.351692] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:54.133 00:28:54.133 Latency(us) 00:28:54.133 [2024-11-19T04:31:10.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.134 [2024-11-19T04:31:10.692Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:54.134 Verification LBA range: start 0x0 length 0x4000 00:28:54.134 Nvme1n1 : 15.00 16854.03 65.84 21736.01 0.00 3305.67 488.24 1033476.51 00:28:54.134 [2024-11-19T04:31:10.692Z] =================================================================================================================== 00:28:54.134 [2024-11-19T04:31:10.692Z] Total : 16854.03 65.84 21736.01 0.00 3305.67 488.24 1033476.51 00:28:54.134 05:31:09 -- host/bdevperf.sh@39 -- # sync 00:28:54.134 05:31:09 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:54.134 05:31:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.134 05:31:09 -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 05:31:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.134 05:31:09 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:54.134 05:31:09 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:54.134 05:31:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:54.134 05:31:09 -- nvmf/common.sh@116 -- # sync 00:28:54.134 05:31:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:54.134 05:31:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:54.134 05:31:09 -- nvmf/common.sh@119 -- # set +e 00:28:54.134 05:31:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:54.134 05:31:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:54.134 rmmod nvme_rdma 00:28:54.134 rmmod nvme_fabrics 00:28:54.134 05:31:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:54.134 05:31:09 -- nvmf/common.sh@123 -- # set -e 00:28:54.134 05:31:09 -- nvmf/common.sh@124 -- # return 0 00:28:54.134 05:31:09 -- nvmf/common.sh@477 -- # '[' -n 1963941 ']' 00:28:54.134 05:31:09 -- nvmf/common.sh@478 -- # killprocess 1963941 00:28:54.134 05:31:09 -- common/autotest_common.sh@936 -- # '[' -z 1963941 ']' 00:28:54.134 05:31:09 -- common/autotest_common.sh@940 -- # kill -0 1963941 00:28:54.134 05:31:09 -- common/autotest_common.sh@941 -- # uname 00:28:54.134 05:31:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:54.134 05:31:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1963941 00:28:54.134 05:31:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:54.134 05:31:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:54.134 05:31:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1963941' 00:28:54.134 killing process with pid 1963941 00:28:54.134 05:31:09 -- common/autotest_common.sh@955 -- # kill 1963941 00:28:54.134 05:31:09 -- common/autotest_common.sh@960 -- # wait 1963941 00:28:54.134 05:31:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:54.134 05:31:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:54.134 00:28:54.134 real 0m25.640s 00:28:54.134 user 1m4.429s 00:28:54.134 sys 0m6.443s 00:28:54.134 05:31:10 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:28:54.134 05:31:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 ************************************ 00:28:54.134 END TEST nvmf_bdevperf 00:28:54.134 ************************************ 00:28:54.134 05:31:10 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:54.134 05:31:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:54.134 05:31:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 ************************************ 00:28:54.134 START TEST nvmf_target_disconnect 00:28:54.134 ************************************ 00:28:54.134 05:31:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:54.134 * Looking for test storage... 00:28:54.134 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:54.134 05:31:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:54.134 05:31:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:54.134 05:31:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:54.134 05:31:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:54.134 05:31:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:54.134 05:31:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:54.134 05:31:10 -- scripts/common.sh@335 -- # IFS=.-: 00:28:54.134 05:31:10 -- scripts/common.sh@335 -- # read -ra ver1 00:28:54.134 05:31:10 -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.134 05:31:10 -- scripts/common.sh@336 -- # read -ra ver2 00:28:54.134 05:31:10 -- scripts/common.sh@337 -- # local 'op=<' 00:28:54.134 05:31:10 -- scripts/common.sh@339 -- # ver1_l=2 00:28:54.134 05:31:10 -- scripts/common.sh@340 -- # ver2_l=1 00:28:54.134 05:31:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:54.134 05:31:10 -- scripts/common.sh@343 -- # case "$op" in 00:28:54.134 05:31:10 -- scripts/common.sh@344 -- # : 1 00:28:54.134 05:31:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:54.134 05:31:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.134 05:31:10 -- scripts/common.sh@364 -- # decimal 1 00:28:54.134 05:31:10 -- scripts/common.sh@352 -- # local d=1 00:28:54.134 05:31:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.134 05:31:10 -- scripts/common.sh@354 -- # echo 1 00:28:54.134 05:31:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:54.134 05:31:10 -- scripts/common.sh@365 -- # decimal 2 00:28:54.134 05:31:10 -- scripts/common.sh@352 -- # local d=2 00:28:54.134 05:31:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.134 05:31:10 -- scripts/common.sh@354 -- # echo 2 00:28:54.134 05:31:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:54.134 05:31:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:54.134 05:31:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:54.134 05:31:10 -- scripts/common.sh@367 -- # return 0 00:28:54.134 05:31:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:54.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.134 --rc genhtml_branch_coverage=1 00:28:54.134 --rc genhtml_function_coverage=1 00:28:54.134 --rc genhtml_legend=1 00:28:54.134 --rc geninfo_all_blocks=1 00:28:54.134 --rc geninfo_unexecuted_blocks=1 00:28:54.134 00:28:54.134 ' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:54.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.134 --rc genhtml_branch_coverage=1 00:28:54.134 --rc genhtml_function_coverage=1 00:28:54.134 --rc genhtml_legend=1 00:28:54.134 --rc geninfo_all_blocks=1 00:28:54.134 --rc geninfo_unexecuted_blocks=1 00:28:54.134 00:28:54.134 ' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:54.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.134 --rc genhtml_branch_coverage=1 00:28:54.134 --rc genhtml_function_coverage=1 00:28:54.134 --rc genhtml_legend=1 00:28:54.134 --rc geninfo_all_blocks=1 00:28:54.134 --rc geninfo_unexecuted_blocks=1 00:28:54.134 00:28:54.134 ' 00:28:54.134 05:31:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:54.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.134 --rc genhtml_branch_coverage=1 00:28:54.134 --rc genhtml_function_coverage=1 00:28:54.134 --rc genhtml_legend=1 00:28:54.134 --rc geninfo_all_blocks=1 00:28:54.134 --rc geninfo_unexecuted_blocks=1 00:28:54.134 00:28:54.134 ' 00:28:54.134 05:31:10 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.134 05:31:10 -- nvmf/common.sh@7 -- # uname -s 00:28:54.134 05:31:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.134 05:31:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.134 05:31:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.134 05:31:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.134 05:31:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.134 05:31:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.134 05:31:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.134 05:31:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.134 05:31:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.134 05:31:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.134 05:31:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:54.134 05:31:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:54.134 05:31:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.134 05:31:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.134 05:31:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.134 05:31:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:54.134 05:31:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.134 05:31:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.134 05:31:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.134 05:31:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.134 05:31:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.135 05:31:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.135 05:31:10 -- paths/export.sh@5 -- # export PATH 00:28:54.135 05:31:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.135 05:31:10 -- nvmf/common.sh@46 -- # : 0 00:28:54.135 05:31:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:54.135 05:31:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:54.135 05:31:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:54.135 05:31:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.135 05:31:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.135 05:31:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:54.135 05:31:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:54.135 05:31:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:54.135 05:31:10 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:54.135 05:31:10 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:54.135 05:31:10 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:54.135 05:31:10 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:54.135 05:31:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:54.135 05:31:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.135 05:31:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:54.135 05:31:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:54.135 05:31:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:54.135 05:31:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.135 05:31:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.135 05:31:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.135 05:31:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:54.135 05:31:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:54.135 05:31:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:54.135 05:31:10 -- common/autotest_common.sh@10 -- # set +x 00:29:00.712 05:31:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:00.712 05:31:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:00.712 05:31:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:00.712 05:31:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:00.712 05:31:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:00.712 05:31:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:00.712 05:31:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:00.712 05:31:16 -- nvmf/common.sh@294 -- # net_devs=() 00:29:00.712 05:31:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:00.712 05:31:16 -- nvmf/common.sh@295 -- # e810=() 00:29:00.712 05:31:16 -- nvmf/common.sh@295 -- # local -ga e810 00:29:00.712 05:31:16 -- nvmf/common.sh@296 -- # x722=() 00:29:00.712 05:31:16 -- nvmf/common.sh@296 -- # local -ga x722 00:29:00.712 05:31:16 -- nvmf/common.sh@297 -- # mlx=() 00:29:00.712 05:31:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:00.712 05:31:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.712 05:31:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
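The array plumbing above is nvmf/common.sh sorting supported NICs by PCI ID: the Intel e810/x722 entries stay empty on this rig, while the Mellanox list matches both ports of the ConnectX adapter (device 0x1015) found next. The same discovery can be sketched directly against sysfs, which is also where the script later resolves each PCI function's netdev (vendor/device values as observed in this run; the loop itself is illustrative, not the harness code):

    # sketch: list Mellanox (vendor 0x15b3) PCI functions and their netdevs
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x15b3 ]] || continue
        echo "$(basename "$pci") device=$(<"$pci/device") net=$(ls "$pci/net" 2>/dev/null)"
    done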
00:29:00.712 05:31:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:00.712 05:31:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:00.712 05:31:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:00.712 05:31:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:00.712 05:31:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.712 05:31:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:00.712 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:00.712 05:31:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:00.712 05:31:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.712 05:31:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:00.712 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:00.712 05:31:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:00.712 05:31:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:00.712 05:31:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:00.712 05:31:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.712 05:31:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.712 05:31:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:00.712 05:31:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.712 05:31:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:00.712 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:00.712 05:31:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.712 05:31:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.712 05:31:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.712 05:31:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:00.712 05:31:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.712 05:31:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:00.712 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:00.713 05:31:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.713 05:31:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:00.713 05:31:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:00.713 05:31:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:00.713 05:31:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:00.713 05:31:16 -- nvmf/common.sh@57 -- # uname 
00:29:00.713 05:31:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:00.713 05:31:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:00.713 05:31:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:00.713 05:31:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:00.713 05:31:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:00.713 05:31:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:00.713 05:31:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:00.713 05:31:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:00.713 05:31:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:00.713 05:31:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:00.713 05:31:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:00.713 05:31:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:00.713 05:31:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:00.713 05:31:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:00.713 05:31:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:00.713 05:31:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:00.713 05:31:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:00.713 05:31:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:00.713 05:31:16 -- nvmf/common.sh@104 -- # continue 2 00:29:00.713 05:31:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:00.713 05:31:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:00.713 05:31:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:00.713 05:31:16 -- nvmf/common.sh@104 -- # continue 2 00:29:00.713 05:31:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:00.713 05:31:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:00.713 05:31:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:00.713 05:31:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:00.713 05:31:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:00.713 05:31:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:00.713 05:31:17 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:00.713 05:31:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:00.713 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:00.713 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:00.713 altname enp217s0f0np0 00:29:00.713 altname ens818f0np0 00:29:00.713 inet 192.168.100.8/24 scope global mlx_0_0 00:29:00.713 valid_lft forever preferred_lft forever 00:29:00.713 05:31:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:00.713 05:31:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:00.713 05:31:17 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:00.713 05:31:17 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:00.713 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:00.713 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:00.713 altname enp217s0f1np1 00:29:00.713 altname ens818f1np1 00:29:00.713 inet 192.168.100.9/24 scope global mlx_0_1 00:29:00.713 valid_lft forever preferred_lft forever 00:29:00.713 05:31:17 -- nvmf/common.sh@410 -- # return 0 00:29:00.713 05:31:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:00.713 05:31:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:00.713 05:31:17 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:00.713 05:31:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:00.713 05:31:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:00.713 05:31:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:00.713 05:31:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:00.713 05:31:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:00.713 05:31:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:00.713 05:31:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:00.713 05:31:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:00.713 05:31:17 -- nvmf/common.sh@104 -- # continue 2 00:29:00.713 05:31:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:00.713 05:31:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:00.713 05:31:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:00.713 05:31:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@104 -- # continue 2 00:29:00.713 05:31:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:00.713 05:31:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:00.713 05:31:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:00.713 05:31:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:00.713 05:31:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:00.713 05:31:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:00.713 05:31:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:00.713 192.168.100.9' 00:29:00.713 05:31:17 -- nvmf/common.sh@445 -- # head -n 1 00:29:00.713 05:31:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:00.713 192.168.100.9' 00:29:00.713 05:31:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:00.713 05:31:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:00.713 192.168.100.9' 00:29:00.713 05:31:17 -- nvmf/common.sh@446 -- # tail -n +2 00:29:00.713 05:31:17 -- nvmf/common.sh@446 -- # 
head -n 1 00:29:00.713 05:31:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:00.713 05:31:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:00.713 05:31:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:00.713 05:31:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:00.713 05:31:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:00.713 05:31:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:00.713 05:31:17 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:00.713 05:31:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:00.713 05:31:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:00.713 05:31:17 -- common/autotest_common.sh@10 -- # set +x 00:29:00.713 ************************************ 00:29:00.713 START TEST nvmf_target_disconnect_tc1 00:29:00.713 ************************************ 00:29:00.713 05:31:17 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:29:00.713 05:31:17 -- host/target_disconnect.sh@32 -- # set +e 00:29:00.713 05:31:17 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:00.713 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.713 [2024-11-19 05:31:17.264409] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:00.713 [2024-11-19 05:31:17.264453] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:00.713 [2024-11-19 05:31:17.264467] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:02.095 [2024-11-19 05:31:18.268341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:02.095 [2024-11-19 05:31:18.268371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:02.095 [2024-11-19 05:31:18.268382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:02.095 [2024-11-19 05:31:18.268408] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:02.095 [2024-11-19 05:31:18.268417] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:02.095 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:02.095 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:02.095 Initializing NVMe Controllers 00:29:02.095 05:31:18 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:02.095 05:31:18 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:02.095 05:31:18 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:29:02.095 05:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:29:02.095 05:31:18 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:02.095 05:31:18 -- host/target_disconnect.sh@41 -- # set -e 00:29:02.095 00:29:02.095 real 0m1.130s 00:29:02.095 user 0m0.864s 00:29:02.095 sys 0m0.255s 00:29:02.095 05:31:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:02.095 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:02.095 ************************************ 00:29:02.095 END TEST nvmf_target_disconnect_tc1 00:29:02.095 ************************************ 00:29:02.095 05:31:18 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:02.095 05:31:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:02.095 05:31:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.095 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:02.095 ************************************ 00:29:02.095 START TEST nvmf_target_disconnect_tc2 00:29:02.095 ************************************ 00:29:02.095 05:31:18 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:29:02.095 05:31:18 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:02.095 05:31:18 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:02.095 05:31:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:02.095 05:31:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:02.095 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:02.095 05:31:18 -- nvmf/common.sh@469 -- # nvmfpid=1969179 00:29:02.095 05:31:18 -- nvmf/common.sh@470 -- # waitforlisten 1969179 00:29:02.095 05:31:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:02.095 05:31:18 -- common/autotest_common.sh@829 -- # '[' -z 1969179 ']' 00:29:02.095 05:31:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.095 05:31:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:02.095 05:31:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.095 05:31:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.095 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:02.095 [2024-11-19 05:31:18.381793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:02.095 [2024-11-19 05:31:18.381845] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.095 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.095 [2024-11-19 05:31:18.467735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.095 [2024-11-19 05:31:18.506029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:02.095 [2024-11-19 05:31:18.506135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.095 [2024-11-19 05:31:18.506144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.095 [2024-11-19 05:31:18.506153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.095 [2024-11-19 05:31:18.506274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:02.095 [2024-11-19 05:31:18.506383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:02.095 [2024-11-19 05:31:18.506491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:02.095 [2024-11-19 05:31:18.506492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:02.664 05:31:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.664 05:31:19 -- common/autotest_common.sh@862 -- # return 0 00:29:02.664 05:31:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:02.664 05:31:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.664 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 05:31:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.923 05:31:19 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 Malloc0 00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 [2024-11-19 05:31:19.296487] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca94b0/0x1cb5870) succeed. 00:29:02.923 [2024-11-19 05:31:19.305829] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1caaaa0/0x1d35900) succeed. 
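The waitforlisten call above simply blocks until the freshly started nvmf_tgt (pid 1969179) answers on its RPC socket, so that the rpc_cmd calls that follow cannot race the target's startup. The idiom reduces to a poll loop; a sketch assuming the default socket path (rpc_get_methods is a stock SPDK RPC):

    # sketch: wait for the target's RPC server before configuring it
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done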
00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 [2024-11-19 05:31:19.451352] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:02.923 05:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.923 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:29:02.923 05:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.923 05:31:19 -- host/target_disconnect.sh@50 -- # reconnectpid=1969415 00:29:02.923 05:31:19 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:02.923 05:31:19 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:03.182 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.089 05:31:21 -- host/target_disconnect.sh@53 -- # kill -9 1969179 00:29:05.089 05:31:21 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with 
error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Read completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 Write completed with error (sct=0, sc=8) 00:29:06.468 starting I/O failed 00:29:06.468 [2024-11-19 05:31:22.647628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.037 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1969179 Killed "${NVMF_APP[@]}" "$@" 00:29:07.037 05:31:23 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:07.037 05:31:23 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:07.037 05:31:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:07.037 05:31:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.037 05:31:23 -- common/autotest_common.sh@10 -- # set +x 00:29:07.037 05:31:23 -- nvmf/common.sh@469 -- # nvmfpid=1970017 00:29:07.037 05:31:23 -- nvmf/common.sh@470 -- # waitforlisten 1970017 00:29:07.037 05:31:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:07.037 05:31:23 -- common/autotest_common.sh@829 -- # '[' -z 1970017 ']' 00:29:07.037 05:31:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.037 05:31:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.037 05:31:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.037 05:31:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.037 05:31:23 -- common/autotest_common.sh@10 -- # set +x 00:29:07.037 [2024-11-19 05:31:23.529693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:07.037 [2024-11-19 05:31:23.529744] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.037 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.297 [2024-11-19 05:31:23.616972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Read completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 Write completed with error (sct=0, sc=8) 00:29:07.297 starting I/O failed 00:29:07.297 [2024-11-19 05:31:23.652676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.297 [2024-11-19 05:31:23.653306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:07.297 [2024-11-19 05:31:23.653404] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.297 [2024-11-19 05:31:23.653413] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.297 [2024-11-19 05:31:23.653422] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.297 [2024-11-19 05:31:23.653556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.297 [2024-11-19 05:31:23.653670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.297 [2024-11-19 05:31:23.653778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.297 [2024-11-19 05:31:23.653780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.865 05:31:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.865 05:31:24 -- common/autotest_common.sh@862 -- # return 0 00:29:07.865 05:31:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:07.865 05:31:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.865 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.865 05:31:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.865 05:31:24 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.865 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.865 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.865 Malloc0 00:29:07.866 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.866 05:31:24 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:07.866 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.866 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.124 [2024-11-19 05:31:24.443304] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23ba4b0/0x23c6870) succeed. 00:29:08.124 [2024-11-19 05:31:24.452727] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23bbaa0/0x2446900) succeed. 
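This is the heart of the tc2 scenario: the first target (pid 1969179) was hard-killed mid-I/O (the "Killed" line above), the host side logged CQ transport errors on its qpairs, and disconnect_init has now brought up a replacement target (pid 1970017) and is rebuilding the same transport, subsystem, and listener so the host's reset path can complete. Reduced to a sketch, with the pid variable and flags as in this run and the RPC replay elided:

    # sketch: force a target-side disconnect, then restore the target
    kill -9 "$nvmfpid"                            # SIGKILL nvmf_tgt while reconnect I/O is in flight
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # start a replacement target
    nvmfpid=$!
    # ...wait for the RPC socket, then replay the transport/subsystem/listener setup shown above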
00:29:08.124 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.124 05:31:24 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.124 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.124 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.124 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.124 05:31:24 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.124 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.124 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.124 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.124 05:31:24 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:08.124 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.124 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.124 [2024-11-19 05:31:24.594766] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:08.124 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.124 05:31:24 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:08.124 05:31:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.124 05:31:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.124 05:31:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.124 05:31:24 -- host/target_disconnect.sh@58 -- # wait 1969415 00:29:08.124 Read completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Write completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Write completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Write completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Read completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Read completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.124 Read completed with error (sct=0, sc=8) 00:29:08.124 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with 
error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Write completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 Read completed with error (sct=0, sc=8) 00:29:08.125 starting I/O failed 00:29:08.125 [2024-11-19 05:31:24.657674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error 
(sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Read completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 Write completed with error (sct=0, sc=8) 00:29:09.506 starting I/O failed 00:29:09.506 [2024-11-19 05:31:25.662765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.506 [2024-11-19 05:31:25.662791] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:09.506 A controller has encountered a failure and is being reset. 00:29:09.506 [2024-11-19 05:31:25.662914] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:09.506 [2024-11-19 05:31:25.694250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:09.506 Controller properly reset. 00:29:13.700 Initializing NVMe Controllers 00:29:13.700 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.700 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.700 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:13.700 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:13.700 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:13.700 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:13.700 Initialization complete. Launching workers. 
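With the controller reset, the reconnect example reattaches to nqn.2016-06.io.spdk:cnode1 and restarts one worker per bit set in its -c 0xF core mask, which is why the qpairs were associated with lcores 0-3 above and exactly four threads start below. A quick sanity check of the mask arithmetic:

    # sketch: 0xF has bits 0-3 set, i.e. four worker cores
    printf '0x%X\n' $(( (1 << 4) - 1 ))   # prints 0xF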
00:29:13.700 Starting thread on core 1
00:29:13.700 Starting thread on core 2
00:29:13.700 Starting thread on core 3
00:29:13.700 Starting thread on core 0
00:29:13.700 05:31:29 -- host/target_disconnect.sh@59 -- # sync
00:29:13.700
00:29:13.700 real	0m11.407s
00:29:13.700 user	0m39.028s
00:29:13.700 sys	0m1.800s
00:29:13.700 05:31:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:13.700 05:31:29 -- common/autotest_common.sh@10 -- # set +x
00:29:13.700 ************************************
00:29:13.700 END TEST nvmf_target_disconnect_tc2
00:29:13.700 ************************************
00:29:13.700 05:31:29 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']'
00:29:13.700 05:31:29 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:29:13.700 05:31:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:13.700 05:31:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:13.700 05:31:29 -- common/autotest_common.sh@10 -- # set +x
00:29:13.700 ************************************
00:29:13.700 START TEST nvmf_target_disconnect_tc3
00:29:13.700 ************************************
00:29:13.700 05:31:29 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3
00:29:13.700 05:31:29 -- host/target_disconnect.sh@65 -- # reconnectpid=1971130
00:29:13.700 05:31:29 -- host/target_disconnect.sh@67 -- # sleep 2
00:29:13.700 05:31:29 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:29:13.700 EAL: No free 2048 kB hugepages reported on node 1
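Annotation for readers replaying this locally: everything tc3 does on the host side flows from the reconnect example invoked above. A minimal sketch of an equivalent invocation follows; the flag glosses are assumptions based on SPDK's perf-style option conventions, not something the test output states, so verify with the tool's --help.

# Hedged sketch: SPDK's reconnect example with a failover address.
# Assumed flag meanings (perf-style conventions):
#   -q 32    queue depth per qpair
#   -o 4096  I/O size in bytes
#   -w randrw -M 50  mixed random workload, 50% reads
#   -t 10    run time in seconds
#   -c 0xF   core mask, lcores 0-3 (matching the four worker threads above)
#   -r       transport ID; alt_traddr supplies the secondary (failover) address
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'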
error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Read completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Read completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Read completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Read completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Read completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 Write completed with error (sct=0, sc=8) 00:29:16.545 starting I/O failed 00:29:16.545 [2024-11-19 05:31:32.980423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.486 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1970017 Killed "${NVMF_APP[@]}" "$@" 00:29:17.486 05:31:33 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:17.486 05:31:33 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:17.486 05:31:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:17.486 05:31:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.486 05:31:33 -- common/autotest_common.sh@10 -- # set +x 00:29:17.486 05:31:33 -- nvmf/common.sh@469 -- # nvmfpid=1971861 00:29:17.486 05:31:33 -- nvmf/common.sh@470 -- # waitforlisten 1971861 00:29:17.486 05:31:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:17.486 05:31:33 -- common/autotest_common.sh@829 -- # '[' -z 1971861 ']' 00:29:17.486 05:31:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.486 05:31:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.486 05:31:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.486 05:31:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.486 05:31:33 -- common/autotest_common.sh@10 -- # set +x 00:29:17.486 [2024-11-19 05:31:33.856159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:17.486 [2024-11-19 05:31:33.856216] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.486 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.486 [2024-11-19 05:31:33.943186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.486 [2024-11-19 05:31:33.979682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.486 [2024-11-19 05:31:33.979796] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:17.486 [2024-11-19 05:31:33.979807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.486 [2024-11-19 05:31:33.979815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.486 [2024-11-19 05:31:33.979937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:17.486 [2024-11-19 05:31:33.980048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:17.486 [2024-11-19 05:31:33.980159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.486 [2024-11-19 05:31:33.980161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Write completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Write completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Write completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Write completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.486 Read completed with error (sct=0, sc=8) 00:29:17.486 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Write completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 Read completed with error (sct=0, sc=8) 00:29:17.487 starting I/O failed 00:29:17.487 [2024-11-19 05:31:33.985387] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.425 05:31:34 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:18.425 05:31:34 -- common/autotest_common.sh@862 -- # return 0
00:29:18.425 05:31:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:18.425 05:31:34 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:18.425 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.425 05:31:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:18.425 05:31:34 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:18.425 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.425 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.425 Malloc0
00:29:18.425 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.425 05:31:34 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:29:18.425 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.425 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.425 [2024-11-19 05:31:34.764026] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18db4b0/0x18e7870) succeed.
00:29:18.425 [2024-11-19 05:31:34.773550] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18dcaa0/0x1967900) succeed.
00:29:18.425 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.425 05:31:34 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:18.425 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.425 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.425 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.425 05:31:34 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:18.425 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.425 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.426 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.426 05:31:34 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:29:18.426 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.426 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.426 [2024-11-19 05:31:34.916403] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:29:18.426 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.426 05:31:34 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:29:18.426 05:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.426 05:31:34 -- common/autotest_common.sh@10 -- # set +x
00:29:18.426 05:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.426 05:31:34 -- host/target_disconnect.sh@73 -- # wait 1971130
00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed
with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Read completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 Write completed with error (sct=0, sc=8) 00:29:18.684 starting I/O failed 00:29:18.684 [2024-11-19 05:31:34.990391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 [2024-11-19 05:31:34.992051] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:18.684 [2024-11-19 05:31:34.992073] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:18.684 [2024-11-19 05:31:34.992089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.621 [2024-11-19 05:31:35.995911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.621 qpair failed and we were unable to recover it. 
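The rpc_cmd sequence a few lines up is the entire target-side setup for tc3. Collected into a standalone sketch for reference: rpc_cmd is a thin wrapper, and the scripts/rpc.py form below, along with the default /var/tmp/spdk.sock socket, is an assumption about how you would replay it by hand against an already-running nvmf_tgt.

# Hedged sketch: replay the target setup from the rpc_cmd calls above.
RPC="./scripts/rpc.py"
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512 B blocks
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024  # bring up the RDMA transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note that this replacement target listens only on the failover address 192.168.100.9, which is why the host's reconnect attempts against 192.168.100.8 keep getting rejected below until it fails over.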
00:29:19.621 [2024-11-19 05:31:35.997341] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:19.621 [2024-11-19 05:31:35.997357] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:19.621 [2024-11-19 05:31:35.997365] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:20.559 [2024-11-19 05:31:37.001067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.559 qpair failed and we were unable to recover it.
00:29:20.559 [2024-11-19 05:31:37.002557] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:20.559 [2024-11-19 05:31:37.002574] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:20.559 [2024-11-19 05:31:37.002582] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:21.497 [2024-11-19 05:31:38.006465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.497 qpair failed and we were unable to recover it.
00:29:21.497 [2024-11-19 05:31:38.007880] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:21.497 [2024-11-19 05:31:38.007898] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:21.497 [2024-11-19 05:31:38.007906] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:22.875 [2024-11-19 05:31:39.011811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.875 qpair failed and we were unable to recover it.
00:29:22.875 [2024-11-19 05:31:39.013362] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:22.875 [2024-11-19 05:31:39.013379] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:22.875 [2024-11-19 05:31:39.013387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:23.812 [2024-11-19 05:31:40.017214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.812 qpair failed and we were unable to recover it.
00:29:23.812 [2024-11-19 05:31:40.018569] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:23.812 [2024-11-19 05:31:40.018589] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:23.812 [2024-11-19 05:31:40.018597] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:24.750 [2024-11-19 05:31:41.022488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.750 qpair failed and we were unable to recover it.
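A note on the status pair that every aborted I/O in this run carries: sct=0 is the Generic Command Status type, and a generic status code of 8, read against the NVMe base specification, is Command Aborted due to SQ Deletion, which is consistent with qpairs being torn down mid-workload. A small convenience sketch for decoding the pair; it covers only the codes seen in this log and is not a full decoder.

# Hedged sketch: decode the (sct, sc) pair printed in the failures above.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "status code type $sct: non-generic, see the NVMe base spec"
        return
    fi
    case "$sc" in
        0) echo "generic: successful completion" ;;
        8) echo "generic: command aborted due to SQ deletion" ;;
        *) echo "generic: status code $sc, see the NVMe base spec" ;;
    esac
}
decode_nvme_status 0 8   # prints: generic: command aborted due to SQ deletion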
00:29:24.750 [2024-11-19 05:31:41.023888] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:24.750 [2024-11-19 05:31:41.023906] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:24.750 [2024-11-19 05:31:41.023914] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:25.730 [2024-11-19 05:31:42.027635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.730 qpair failed and we were unable to recover it.
00:29:25.730 [2024-11-19 05:31:42.029224] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:25.730 [2024-11-19 05:31:42.029248] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:25.730 [2024-11-19 05:31:42.029256] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840
00:29:26.723 [2024-11-19 05:31:43.032888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.723 qpair failed and we were unable to recover it.
00:29:26.723 [2024-11-19 05:31:43.034392] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.723 [2024-11-19 05:31:43.034409] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:26.723 [2024-11-19 05:31:43.034417] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840
00:29:27.660 [2024-11-19 05:31:44.038109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.660 qpair failed and we were unable to recover it.
00:29:27.660 [2024-11-19 05:31:44.038235] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:27.660 A controller has encountered a failure and is being reset.
00:29:27.660 Resorting to new failover address 192.168.100.9
00:29:27.660 [2024-11-19 05:31:44.039844] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:27.660 [2024-11-19 05:31:44.039872] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:27.660 [2024-11-19 05:31:44.039883] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:29:28.598 [2024-11-19 05:31:45.043632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:28.598 qpair failed and we were unable to recover it.
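The cycle above is the host half of the failover: each reconnect attempt is rejected (RDMA_CM_EVENT_REJECTED, connect error -74), and once the keep-alive also fails the example resorts to the alt_traddr given on the command line, 192.168.100.9. One way to confirm from the target side which addresses a subsystem is actually serving is the listener query RPC; the RPC name below matches current SPDK, but treat the exact output shape as an assumption.

# Hedged sketch: list the active listeners for cnode1 during the failover window.
./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
# Expected here: a single rdma listener on 192.168.100.9:4420, since tc3 only
# ever adds the failover address to this subsystem.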
00:29:28.598 [2024-11-19 05:31:45.045030] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:28.598 [2024-11-19 05:31:45.045047] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:28.598 [2024-11-19 05:31:45.045055] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:29.535 [2024-11-19 05:31:46.048743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:29.535 qpair failed and we were unable to recover it. 00:29:29.535 [2024-11-19 05:31:46.048864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.535 [2024-11-19 05:31:46.048972] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:29.535 [2024-11-19 05:31:46.050831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:29.535 Controller properly reset. 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, 
sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Write completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 Read completed with error (sct=0, sc=8) 00:29:30.914 starting I/O failed 00:29:30.914 [2024-11-19 05:31:47.097709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.914 Initializing NVMe Controllers 00:29:30.914 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.914 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.914 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:30.914 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:30.914 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:30.914 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:30.914 Initialization complete. Launching workers. 00:29:30.914 Starting thread on core 1 00:29:30.914 Starting thread on core 2 00:29:30.914 Starting thread on core 3 00:29:30.914 Starting thread on core 0 00:29:30.914 05:31:47 -- host/target_disconnect.sh@74 -- # sync 00:29:30.914 00:29:30.914 real 0m17.357s 00:29:30.914 user 0m56.149s 00:29:30.914 sys 0m5.175s 00:29:30.914 05:31:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:30.914 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.914 ************************************ 00:29:30.914 END TEST nvmf_target_disconnect_tc3 00:29:30.914 ************************************ 00:29:30.914 05:31:47 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:30.914 05:31:47 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:30.914 05:31:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:30.914 05:31:47 -- nvmf/common.sh@116 -- # sync 00:29:30.914 05:31:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:30.915 05:31:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:30.915 05:31:47 -- nvmf/common.sh@119 -- # set +e 00:29:30.915 05:31:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:30.915 05:31:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:30.915 rmmod nvme_rdma 00:29:30.915 rmmod nvme_fabrics 00:29:30.915 05:31:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:30.915 05:31:47 -- nvmf/common.sh@123 -- # set -e 00:29:30.915 05:31:47 -- nvmf/common.sh@124 -- # return 0 00:29:30.915 05:31:47 -- nvmf/common.sh@477 -- # '[' -n 1971861 ']' 00:29:30.915 05:31:47 -- nvmf/common.sh@478 -- # killprocess 1971861 00:29:30.915 05:31:47 -- common/autotest_common.sh@936 -- # '[' -z 1971861 ']' 00:29:30.915 05:31:47 -- common/autotest_common.sh@940 -- # kill -0 1971861 00:29:30.915 05:31:47 -- common/autotest_common.sh@941 -- # uname 00:29:30.915 05:31:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:30.915 05:31:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1971861 00:29:30.915 05:31:47 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:30.915 05:31:47 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:30.915 05:31:47 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1971861' 00:29:30.915 killing process with pid 1971861 00:29:30.915 05:31:47 -- common/autotest_common.sh@955 -- # kill 1971861 00:29:30.915 05:31:47 -- common/autotest_common.sh@960 -- # wait 1971861 00:29:31.175 05:31:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:31.175 05:31:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:31.175 00:29:31.175 real 0m37.426s 00:29:31.175 user 2m31.974s 00:29:31.175 sys 0m12.841s 00:29:31.175 05:31:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:31.175 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.175 ************************************ 00:29:31.175 END TEST nvmf_target_disconnect 00:29:31.175 ************************************ 00:29:31.175 05:31:47 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:31.175 05:31:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:31.175 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.175 05:31:47 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:31.175 00:29:31.175 real 21m14.443s 00:29:31.175 user 68m6.087s 00:29:31.175 sys 4m58.416s 00:29:31.175 05:31:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:31.175 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.175 ************************************ 00:29:31.175 END TEST nvmf_rdma 00:29:31.175 ************************************ 00:29:31.175 05:31:47 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:31.175 05:31:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:31.175 05:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:31.175 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.175 ************************************ 00:29:31.175 START TEST spdkcli_nvmf_rdma 00:29:31.175 ************************************ 00:29:31.175 05:31:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:31.435 * Looking for test storage... 00:29:31.435 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:31.435 05:31:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:31.435 05:31:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:31.435 05:31:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:31.435 05:31:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:31.435 05:31:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:31.435 05:31:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:31.435 05:31:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:31.435 05:31:47 -- scripts/common.sh@335 -- # IFS=.-: 00:29:31.435 05:31:47 -- scripts/common.sh@335 -- # read -ra ver1 00:29:31.435 05:31:47 -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.435 05:31:47 -- scripts/common.sh@336 -- # read -ra ver2 00:29:31.435 05:31:47 -- scripts/common.sh@337 -- # local 'op=<' 00:29:31.435 05:31:47 -- scripts/common.sh@339 -- # ver1_l=2 00:29:31.435 05:31:47 -- scripts/common.sh@340 -- # ver2_l=1 00:29:31.435 05:31:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:31.435 05:31:47 -- scripts/common.sh@343 -- # case "$op" in 00:29:31.435 05:31:47 -- scripts/common.sh@344 -- # : 1 00:29:31.435 05:31:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:31.435 05:31:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.435 05:31:47 -- scripts/common.sh@364 -- # decimal 1 00:29:31.435 05:31:47 -- scripts/common.sh@352 -- # local d=1 00:29:31.435 05:31:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.435 05:31:47 -- scripts/common.sh@354 -- # echo 1 00:29:31.435 05:31:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:31.435 05:31:47 -- scripts/common.sh@365 -- # decimal 2 00:29:31.435 05:31:47 -- scripts/common.sh@352 -- # local d=2 00:29:31.435 05:31:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.435 05:31:47 -- scripts/common.sh@354 -- # echo 2 00:29:31.435 05:31:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:31.435 05:31:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:31.435 05:31:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:31.435 05:31:47 -- scripts/common.sh@367 -- # return 0 00:29:31.435 05:31:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.435 05:31:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.435 --rc genhtml_branch_coverage=1 00:29:31.435 --rc genhtml_function_coverage=1 00:29:31.435 --rc genhtml_legend=1 00:29:31.435 --rc geninfo_all_blocks=1 00:29:31.435 --rc geninfo_unexecuted_blocks=1 00:29:31.435 00:29:31.435 ' 00:29:31.435 05:31:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.435 --rc genhtml_branch_coverage=1 00:29:31.435 --rc genhtml_function_coverage=1 00:29:31.435 --rc genhtml_legend=1 00:29:31.435 --rc geninfo_all_blocks=1 00:29:31.435 --rc geninfo_unexecuted_blocks=1 00:29:31.435 00:29:31.435 ' 00:29:31.435 05:31:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.435 --rc genhtml_branch_coverage=1 00:29:31.435 --rc genhtml_function_coverage=1 00:29:31.435 --rc genhtml_legend=1 00:29:31.435 --rc geninfo_all_blocks=1 00:29:31.435 --rc geninfo_unexecuted_blocks=1 00:29:31.435 00:29:31.435 ' 00:29:31.435 05:31:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.435 --rc genhtml_branch_coverage=1 00:29:31.435 --rc genhtml_function_coverage=1 00:29:31.435 --rc genhtml_legend=1 00:29:31.435 --rc geninfo_all_blocks=1 00:29:31.435 --rc geninfo_unexecuted_blocks=1 00:29:31.435 00:29:31.435 ' 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:31.435 05:31:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:31.435 05:31:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.435 05:31:47 -- nvmf/common.sh@7 -- # uname -s 00:29:31.435 05:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.435 05:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.435 05:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.435 05:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.435 05:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.435 05:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:29:31.435 05:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.435 05:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.435 05:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.435 05:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.435 05:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:31.435 05:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:31.435 05:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.435 05:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.435 05:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.435 05:31:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:31.435 05:31:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.435 05:31:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.435 05:31:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.435 05:31:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.435 05:31:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.435 05:31:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.435 05:31:47 -- paths/export.sh@5 -- # export PATH 00:29:31.435 05:31:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.435 05:31:47 -- nvmf/common.sh@46 -- # : 0 00:29:31.435 05:31:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:31.435 05:31:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:31.435 05:31:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:31.435 05:31:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.435 05:31:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.435 05:31:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:31.435 05:31:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
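One detail worth flagging in the environment setup above: the host NQN is not hard-coded; nvmf/common.sh derives it from the machine with nvme-cli, so the uuid suffix seen here is specific to this test node. The equivalent one-liner, as a sketch:

# Derive a host NQN the same way the harness does (nvme-cli's gen-hostnqn):
NVME_HOSTNQN=$(nvme gen-hostnqn)
echo "$NVME_HOSTNQN"   # e.g. nqn.2014-08.org.nvmexpress:uuid:<node-specific uuid>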
00:29:31.435 05:31:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:31.435 05:31:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:31.435 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.435 05:31:47 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:31.435 05:31:47 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1974257 00:29:31.435 05:31:47 -- spdkcli/common.sh@34 -- # waitforlisten 1974257 00:29:31.436 05:31:47 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:31.436 05:31:47 -- common/autotest_common.sh@829 -- # '[' -z 1974257 ']' 00:29:31.436 05:31:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.436 05:31:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.436 05:31:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.436 05:31:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.436 05:31:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.436 [2024-11-19 05:31:47.991182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:31.436 [2024-11-19 05:31:47.991236] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974257 ] 00:29:31.695 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.695 [2024-11-19 05:31:48.062500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:31.695 [2024-11-19 05:31:48.100521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:31.695 [2024-11-19 05:31:48.100697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.695 [2024-11-19 05:31:48.100700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.263 05:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.263 05:31:48 -- common/autotest_common.sh@862 -- # return 0 00:29:32.263 05:31:48 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:32.263 05:31:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.263 05:31:48 -- common/autotest_common.sh@10 -- # set +x 00:29:32.523 05:31:48 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:32.523 05:31:48 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:32.523 05:31:48 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:32.523 05:31:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:32.523 05:31:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.523 05:31:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:32.523 05:31:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:32.523 05:31:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:32.523 05:31:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.523 05:31:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:32.523 05:31:48 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:32.523 05:31:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:32.523 05:31:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:32.523 05:31:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:32.523 05:31:48 -- common/autotest_common.sh@10 -- # set +x 00:29:39.105 05:31:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:39.105 05:31:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:39.105 05:31:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:39.105 05:31:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:39.105 05:31:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:39.105 05:31:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:39.105 05:31:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:39.105 05:31:55 -- nvmf/common.sh@294 -- # net_devs=() 00:29:39.105 05:31:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:39.105 05:31:55 -- nvmf/common.sh@295 -- # e810=() 00:29:39.105 05:31:55 -- nvmf/common.sh@295 -- # local -ga e810 00:29:39.105 05:31:55 -- nvmf/common.sh@296 -- # x722=() 00:29:39.105 05:31:55 -- nvmf/common.sh@296 -- # local -ga x722 00:29:39.105 05:31:55 -- nvmf/common.sh@297 -- # mlx=() 00:29:39.105 05:31:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:39.105 05:31:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.105 05:31:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:39.105 05:31:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:39.105 05:31:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:39.105 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:39.105 05:31:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.105 05:31:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:39.105 05:31:55 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:39.105 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:39.105 05:31:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.105 05:31:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:39.105 05:31:55 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:39.105 05:31:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.105 05:31:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:39.105 05:31:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.105 05:31:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:39.105 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:39.105 05:31:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:39.105 05:31:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.105 05:31:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:39.105 05:31:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.105 05:31:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:39.105 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:39.105 05:31:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.105 05:31:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:39.105 05:31:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:39.105 05:31:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:39.105 05:31:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:39.106 05:31:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:39.106 05:31:55 -- nvmf/common.sh@57 -- # uname 00:29:39.106 05:31:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:39.106 05:31:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:39.106 05:31:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:39.106 05:31:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:39.106 05:31:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:39.106 05:31:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:39.106 05:31:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:39.106 05:31:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:39.106 05:31:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:39.106 05:31:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:39.106 05:31:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:39.106 05:31:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.106 05:31:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:39.106 05:31:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:39.106 05:31:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.106 05:31:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:39.106 05:31:55 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@104 -- # continue 2 00:29:39.106 05:31:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@104 -- # continue 2 00:29:39.106 05:31:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:39.106 05:31:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:39.106 05:31:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:39.106 05:31:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:39.106 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.106 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:39.106 altname enp217s0f0np0 00:29:39.106 altname ens818f0np0 00:29:39.106 inet 192.168.100.8/24 scope global mlx_0_0 00:29:39.106 valid_lft forever preferred_lft forever 00:29:39.106 05:31:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:39.106 05:31:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:39.106 05:31:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:39.106 05:31:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:39.106 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.106 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:39.106 altname enp217s0f1np1 00:29:39.106 altname ens818f1np1 00:29:39.106 inet 192.168.100.9/24 scope global mlx_0_1 00:29:39.106 valid_lft forever preferred_lft forever 00:29:39.106 05:31:55 -- nvmf/common.sh@410 -- # return 0 00:29:39.106 05:31:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:39.106 05:31:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:39.106 05:31:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:39.106 05:31:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:39.106 05:31:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.106 05:31:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:39.106 05:31:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:39.106 05:31:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.106 05:31:55 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:39.106 05:31:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@104 -- # continue 2 00:29:39.106 05:31:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.106 05:31:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.106 05:31:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@104 -- # continue 2 00:29:39.106 05:31:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:39.106 05:31:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:39.106 05:31:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:39.106 05:31:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:39.106 05:31:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:39.106 05:31:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:39.106 192.168.100.9' 00:29:39.106 05:31:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:39.106 192.168.100.9' 00:29:39.106 05:31:55 -- nvmf/common.sh@445 -- # head -n 1 00:29:39.106 05:31:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:39.106 05:31:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:39.106 192.168.100.9' 00:29:39.106 05:31:55 -- nvmf/common.sh@446 -- # tail -n +2 00:29:39.106 05:31:55 -- nvmf/common.sh@446 -- # head -n 1 00:29:39.106 05:31:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:39.106 05:31:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:39.106 05:31:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:39.106 05:31:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:39.106 05:31:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:39.106 05:31:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:39.106 05:31:55 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:39.106 05:31:55 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:39.106 05:31:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:39.106 05:31:55 -- common/autotest_common.sh@10 -- # set +x 00:29:39.106 05:31:55 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:39.106 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:39.106 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:39.106 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:39.106 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:39.106 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:39.106 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:39.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:39.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:39.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:39.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:39.107 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:39.107 ' 00:29:39.675 [2024-11-19 05:31:56.001689] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:41.587 [2024-11-19 05:31:58.068631] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1afedf0/0x1b020f0) succeed. 
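For reference, the 192.168.100.8 / 192.168.100.9 target addresses used throughout this run are derived by the nvmf/common.sh helpers whose xtrace appears above. A minimal sketch reconstructed from that trace; the function names and line references come from the trace itself, but the real bodies in nvmf/common.sh may differ:

    # get_ip_address: first IPv4 address on an interface (nvmf/common.sh@111-112)
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # The per-NIC results are joined into RDMA_IP_LIST and then split with
    # head/tail exactly as traced at nvmf/common.sh@445-446:
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)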
00:29:41.587 [2024-11-19 05:31:58.078572] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b004d0/0x1b43790) succeed. 00:29:42.966 [2024-11-19 05:31:59.319888] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:45.501 [2024-11-19 05:32:01.511108] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:46.880 [2024-11-19 05:32:03.397367] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:48.787 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:48.788 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:48.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:48.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:48.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:48.788 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:48.788 05:32:04 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:48.788 05:32:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.788 05:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.788 05:32:05 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:48.788 05:32:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:48.788 05:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:48.788 05:32:05 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:48.788 05:32:05 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:49.047 05:32:05 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:49.047 05:32:05 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:49.047 05:32:05 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:49.047 05:32:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.047 05:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.047 05:32:05 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:49.047 05:32:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.047 05:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.047 05:32:05 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:49.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:49.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:49.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:49.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:49.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:49.047 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:49.047 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:49.047 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:49.047 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:49.047 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:49.047 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:49.047 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:49.047 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:49.047 ' 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:54.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:54.325 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:54.325 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:54.325 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:54.325 05:32:10 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:54.325 05:32:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:54.325 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:54.325 05:32:10 -- spdkcli/nvmf.sh@90 -- # killprocess 1974257 00:29:54.325 05:32:10 -- common/autotest_common.sh@936 -- # '[' -z 1974257 ']' 00:29:54.325 05:32:10 -- common/autotest_common.sh@940 -- # kill -0 1974257 00:29:54.325 05:32:10 -- common/autotest_common.sh@941 -- # uname 00:29:54.325 05:32:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:54.325 05:32:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1974257 00:29:54.325 05:32:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:54.325 05:32:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:54.325 05:32:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1974257' 00:29:54.325 killing process with pid 1974257 00:29:54.325 05:32:10 -- common/autotest_common.sh@955 -- # kill 1974257 00:29:54.325 [2024-11-19 05:32:10.579788] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:54.325 05:32:10 -- common/autotest_common.sh@960 -- # wait 1974257 00:29:54.325 05:32:10 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:54.325 05:32:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:54.325 05:32:10 -- nvmf/common.sh@116 -- # sync 00:29:54.325 05:32:10 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:54.325 05:32:10 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:54.325 05:32:10 -- nvmf/common.sh@119 -- # set +e 00:29:54.325 05:32:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:54.325 05:32:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:54.325 rmmod nvme_rdma 00:29:54.325 rmmod nvme_fabrics 
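The killprocess call traced above (autotest_common.sh@936-960, invoked from spdkcli/nvmf.sh@90) follows this shape; a sketch inferred from the xtrace line numbers, not the verbatim source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # @936: a pid is required
        kill -0 "$pid" || return 1              # @940: process must still exist
        local process_name=
        if [ "$(uname)" = Linux ]; then         # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942
        fi
        # @946 compares $process_name against "sudo"; that branch is not
        # exercised in this log (process_name is reactor_0), so it is omitted.
        echo "killing process with pid $pid"    # @954
        kill "$pid" && wait "$pid"              # @955, then @960: reap the
    }                                           # shell's own child process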
00:29:54.325 05:32:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:54.325 05:32:10 -- nvmf/common.sh@123 -- # set -e 00:29:54.325 05:32:10 -- nvmf/common.sh@124 -- # return 0 00:29:54.325 05:32:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:54.325 05:32:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:54.325 05:32:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:54.325 00:29:54.325 real 0m23.155s 00:29:54.325 user 0m49.085s 00:29:54.325 sys 0m6.082s 00:29:54.325 05:32:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:54.325 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:54.325 ************************************ 00:29:54.325 END TEST spdkcli_nvmf_rdma 00:29:54.325 ************************************ 00:29:54.585 05:32:10 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:54.585 05:32:10 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:54.585 05:32:10 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:54.585 05:32:10 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:54.585 05:32:10 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:54.585 05:32:10 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:54.585 05:32:10 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:54.585 05:32:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:54.585 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:54.585 05:32:10 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:54.585 05:32:10 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:54.585 05:32:10 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:54.585 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:30:01.158 INFO: APP EXITING 00:30:01.158 INFO: killing all VMs 00:30:01.158 INFO: killing vhost app 00:30:01.158 INFO: EXIT DONE 00:30:03.753 Waiting for block devices as requested 00:30:03.753 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:03.753 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:03.753 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:03.753 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:03.753 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:03.753 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:04.044 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:04.044 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:04.044 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:04.044 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:04.303 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:04.303 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:04.303 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:04.563 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:04.563 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:04.563 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:04.823 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:08.115 
Cleaning 00:30:08.115 Removing: /var/run/dpdk/spdk0/config 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:08.115 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:08.115 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:08.115 Removing: /var/run/dpdk/spdk1/config 00:30:08.115 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:08.115 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:08.115 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:08.115 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:08.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:08.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:08.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:08.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:08.376 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:08.376 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:08.376 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:08.376 Removing: /var/run/dpdk/spdk2/config 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:08.376 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:08.376 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:08.376 Removing: /var/run/dpdk/spdk3/config 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:08.376 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:08.376 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:08.376 Removing: /var/run/dpdk/spdk4/config 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:08.376 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:30:08.376 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:08.376 Removing: /dev/shm/bdevperf_trace.pid1803433 00:30:08.376 Removing: /dev/shm/bdevperf_trace.pid1897956 00:30:08.376 Removing: /dev/shm/bdev_svc_trace.1 00:30:08.376 Removing: /dev/shm/nvmf_trace.0 00:30:08.376 Removing: /dev/shm/spdk_tgt_trace.pid1639019 00:30:08.376 Removing: /var/run/dpdk/spdk0 00:30:08.376 Removing: /var/run/dpdk/spdk1 00:30:08.376 Removing: /var/run/dpdk/spdk2 00:30:08.376 Removing: /var/run/dpdk/spdk3 00:30:08.376 Removing: /var/run/dpdk/spdk4 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1636288 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1637577 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1639019 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1639645 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1644722 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1646207 00:30:08.376 Removing: /var/run/dpdk/spdk_pid1646540 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1646914 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1647344 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1647713 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1647865 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1648118 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1648439 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1649296 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1653045 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1653340 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1653640 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1653809 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1654480 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1654502 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1655074 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1655326 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1655590 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1655655 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1655947 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1656082 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1656595 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1656879 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1657217 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1657496 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1657546 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1657605 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1657871 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1658158 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1658431 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1658697 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1658842 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1659039 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1659288 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1659573 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1659842 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1660130 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1660357 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1660539 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1660709 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1660990 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1661263 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1661544 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1661816 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1662058 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1662205 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1662410 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1662678 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1662964 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1663230 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1663519 00:30:08.636 Removing: 
/var/run/dpdk/spdk_pid1663749 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1663936 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1664096 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1664380 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1664648 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1664929 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1665203 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1665454 00:30:08.636 Removing: /var/run/dpdk/spdk_pid1665608 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1665820 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1666072 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1666358 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1666624 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1666913 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1667181 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1667467 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1667540 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1667879 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1671884 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1769411 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1773640 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1784351 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1789650 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1793379 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1794197 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1803433 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1803757 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1808031 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1813947 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1816716 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1827529 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1852373 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1856054 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1861198 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1895754 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1896717 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1897956 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1902235 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1909335 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1910372 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1911293 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1912807 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1913087 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1917631 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1917633 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1922215 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1922769 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1923422 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1924103 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1924205 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1926639 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1928629 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1930554 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1932470 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1934349 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1936234 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1942434 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1943091 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1945402 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1946622 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1954253 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1956971 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1962715 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1962985 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1968888 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1969415 00:30:08.896 Removing: /var/run/dpdk/spdk_pid1971130 00:30:09.156 Removing: 
/var/run/dpdk/spdk_pid1974257 00:30:09.156 Clean 00:30:09.156 killing process with pid 1586441 00:30:27.255 killing process with pid 1586438 00:30:27.255 killing process with pid 1586440 00:30:27.255 killing process with pid 1586439 00:30:27.255 05:32:41 -- common/autotest_common.sh@1446 -- # return 0 00:30:27.255 05:32:41 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:27.255 05:32:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.255 05:32:41 -- common/autotest_common.sh@10 -- # set +x 00:30:27.255 05:32:41 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:27.255 05:32:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.255 05:32:41 -- common/autotest_common.sh@10 -- # set +x 00:30:27.255 05:32:41 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:27.255 05:32:41 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:27.255 05:32:41 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:27.255 05:32:41 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:27.255 05:32:41 -- spdk/autotest.sh@383 -- # hostname 00:30:27.255 05:32:41 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:27.255 geninfo: WARNING: invalid characters removed from testname! 00:30:45.352 05:32:59 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:45.611 05:33:02 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:47.518 05:33:03 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:48.898 05:33:05 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:50.276 05:33:06 -- spdk/autotest.sh@391 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:52.183 05:33:08 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:53.574 05:33:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:53.574 05:33:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:53.574 05:33:09 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:53.574 05:33:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:53.574 05:33:09 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:53.574 05:33:09 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:53.574 05:33:09 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:53.574 05:33:09 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:53.574 05:33:09 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:53.574 05:33:09 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:53.574 05:33:09 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:53.574 05:33:09 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:53.574 05:33:09 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:53.574 05:33:09 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:53.574 05:33:09 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:53.574 05:33:09 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:53.574 05:33:09 -- scripts/common.sh@343 -- $ case "$op" in 00:30:53.574 05:33:09 -- scripts/common.sh@344 -- $ : 1 00:30:53.574 05:33:09 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:53.574 05:33:09 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:53.574 05:33:09 -- scripts/common.sh@364 -- $ decimal 1 00:30:53.574 05:33:09 -- scripts/common.sh@352 -- $ local d=1 00:30:53.574 05:33:09 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:53.574 05:33:09 -- scripts/common.sh@354 -- $ echo 1 00:30:53.574 05:33:09 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:53.574 05:33:09 -- scripts/common.sh@365 -- $ decimal 2 00:30:53.574 05:33:09 -- scripts/common.sh@352 -- $ local d=2 00:30:53.574 05:33:09 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:53.574 05:33:09 -- scripts/common.sh@354 -- $ echo 2 00:30:53.574 05:33:09 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:53.574 05:33:09 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:53.574 05:33:09 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:53.574 05:33:09 -- scripts/common.sh@367 -- $ return 0 00:30:53.574 05:33:09 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.574 05:33:09 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.574 --rc genhtml_branch_coverage=1 00:30:53.574 --rc genhtml_function_coverage=1 00:30:53.574 --rc genhtml_legend=1 00:30:53.574 --rc geninfo_all_blocks=1 00:30:53.574 --rc geninfo_unexecuted_blocks=1 00:30:53.574 00:30:53.574 ' 00:30:53.574 05:33:09 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.574 --rc genhtml_branch_coverage=1 00:30:53.574 --rc genhtml_function_coverage=1 00:30:53.574 --rc genhtml_legend=1 00:30:53.574 --rc geninfo_all_blocks=1 00:30:53.574 --rc geninfo_unexecuted_blocks=1 00:30:53.574 00:30:53.574 ' 00:30:53.574 05:33:09 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.574 --rc genhtml_branch_coverage=1 00:30:53.574 --rc genhtml_function_coverage=1 00:30:53.574 --rc genhtml_legend=1 00:30:53.574 --rc geninfo_all_blocks=1 00:30:53.574 --rc geninfo_unexecuted_blocks=1 00:30:53.574 00:30:53.574 ' 00:30:53.574 05:33:09 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.574 --rc genhtml_branch_coverage=1 00:30:53.574 --rc genhtml_function_coverage=1 00:30:53.574 --rc genhtml_legend=1 00:30:53.574 --rc geninfo_all_blocks=1 00:30:53.574 --rc geninfo_unexecuted_blocks=1 00:30:53.574 00:30:53.574 ' 00:30:53.574 05:33:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:53.574 05:33:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:53.574 05:33:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.574 05:33:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.574 05:33:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.574 05:33:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 05:33:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 05:33:09 -- paths/export.sh@5 -- $ export PATH 00:30:53.575 05:33:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.575 05:33:09 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:53.575 05:33:09 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:53.575 05:33:09 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731990789.XXXXXX 00:30:53.575 05:33:09 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731990789.q302YK 00:30:53.575 05:33:09 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:53.575 05:33:09 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:30:53.575 05:33:09 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:30:53.575 05:33:09 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:30:53.575 05:33:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:53.575 05:33:09 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:53.575 05:33:09 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:53.575 05:33:09 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:53.575 05:33:09 -- common/autotest_common.sh@10 -- $ set +x 00:30:53.575 05:33:09 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:30:53.575 05:33:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:53.575 05:33:09 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:53.575 05:33:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:53.575 05:33:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:53.575 05:33:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:53.575 05:33:09 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:53.575 
05:33:09 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:53.575 05:33:09 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:53.575 05:33:09 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:53.575 05:33:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:53.575 + [[ -n 1532384 ]] 00:30:53.575 + sudo kill 1532384 00:30:53.585 [Pipeline] } 00:30:53.600 [Pipeline] // stage 00:30:53.606 [Pipeline] } 00:30:53.621 [Pipeline] // timeout 00:30:53.627 [Pipeline] } 00:30:53.641 [Pipeline] // catchError 00:30:53.647 [Pipeline] } 00:30:53.664 [Pipeline] // wrap 00:30:53.671 [Pipeline] } 00:30:53.685 [Pipeline] // catchError 00:30:53.695 [Pipeline] stage 00:30:53.698 [Pipeline] { (Epilogue) 00:30:53.712 [Pipeline] catchError 00:30:53.714 [Pipeline] { 00:30:53.729 [Pipeline] echo 00:30:53.731 Cleanup processes 00:30:53.738 [Pipeline] sh 00:30:54.031 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:54.031 1996154 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:54.047 [Pipeline] sh 00:30:54.337 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:54.337 ++ grep -v 'sudo pgrep' 00:30:54.337 ++ awk '{print $1}' 00:30:54.337 + sudo kill -9 00:30:54.337 + true 00:30:54.350 [Pipeline] sh 00:30:54.637 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:54.637 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:01.284 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:03.832 [Pipeline] sh 00:31:04.119 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:04.119 Artifacts sizes are good 00:31:04.134 [Pipeline] archiveArtifacts 00:31:04.142 Archiving artifacts 00:31:04.286 [Pipeline] sh 00:31:04.583 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:31:04.598 [Pipeline] cleanWs 00:31:04.609 [WS-CLEANUP] Deleting project workspace... 00:31:04.609 [WS-CLEANUP] Deferred wipeout is used... 00:31:04.616 [WS-CLEANUP] done 00:31:04.618 [Pipeline] } 00:31:04.635 [Pipeline] // catchError 00:31:04.646 [Pipeline] sh 00:31:04.934 + logger -p user.info -t JENKINS-CI 00:31:04.944 [Pipeline] } 00:31:04.956 [Pipeline] // stage 00:31:04.961 [Pipeline] } 00:31:04.975 [Pipeline] // node 00:31:04.980 [Pipeline] End of Pipeline 00:31:05.021 Finished: SUCCESS
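The lcov version probe traced near the end of the run (scripts/common.sh, 05:33:09) boils down to a component-wise numeric comparison: `lt 1.15 2` decides whether the installed lcov accepts the newer branch/function coverage options. A sketch reconstructed from that xtrace (@332-@367); the in-tree cmp_versions also normalizes non-numeric components through the decimal() helper visible in the trace, which is elided here:

    # cmp_versions A op B: compare dotted version strings component by component.
    cmp_versions() {
        local IFS=.-:                 # split on '.', '-' and ':' (@335-336)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do                       # @363
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }        # @366
            (( a < b )) && { [[ $op == '<' ]]; return; }        # @367
        done
        [[ $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2 succeeds here (1 < 2), so the richer LCOV_OPTS are exported.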